Can somebody who read the paper in depth comment on whether the result is practical or not? Since it's a TCS paper, it's hard to gauge whether the constant factors are palatable or not, and there are many theoretical CS papers that give asymptotically optimal or significantly improved algorithms that have astronomical constant factors (e.g. matrix multiplication, etc.)

It will take me a while to read the paper. I saw there is no code inside. Do they have code that improves LZ parsing as used in all general purpose compressors? If they don't have code, can their algorithm be implemented to improve performance of zstd or lzma?

I used to play with suffix arrays a long time ago. I wanted to accelerate grep on a gigabyte text file. The tool was called "sary" (short for suffix array) and still exists on a forgotten SourceForge page. Good tool; it was able to find any substring in a huge file instantly.

As a result, I would hesitate before using "ego depletion" as an excuse for rationalizing a lack of self-control (e.g., giving in to "cheat foods" or being irritable/impulsive). Whether or not "ego depletion" is real, science has not yet adequately validated the theory.

Moreover, there is a risk to accepting the theory as true: believing in "ego depletion" lets one rationalize a lower degree of self-control than one might otherwise have exercised. This creates a self-fulfilling prophecy.

I think it is fair to assume that, given the current research, "ego depletion" is no more than a reasonable hypothesis. It is possible that willpower may not fit the "finite resource" model at all.

I find that I lose and regain focus throughout the day when I have a task where I don't know what to do next. When I do have energy I try to push past it. After a while, though, I start reading HN. The amount of time I spend reading HN depends on an ever-decreasing amount of energy. After about 3-5 rounds of losing focus and not making progress, I spend increasingly longer on the site.

Following the website's advice, I've been practicing mindfulness throughout the day since I got my Apple Watch (through the Breathe app). I think mindfulness would be good for hackers. On physical tasks you could lift weights, but on mental tasks your goal is probably reducing anxiety or frustration, or preserving flow (Pomodoro supposedly does this, but I could never get into it).

On the other hand, if I get a good flow going and I am uninterrupted, I will probably forget to check HN and will work continuously until I get stuck, if that even happens.

It's interesting to see research suggesting that this might be helpful, since I feel like I've stumbled on something similar. Lately I've had several instances where I've been completely stuck on a problem, so I'd stop everything and take a long hike on a trail or through some mountains. After taking a day of just literal wandering, I've found I'd be able to finally make progress on what I was working on once I pick it back up again, where I otherwise felt like I was hitting a wall. It's felt like my mind does some unconscious processing when I allow it to take a break, so it's encouraging to see some evidence to support that.

Having discovered some of these techniques through trial and error, it is helpful to see some research backing up and expanding on keeping the creativity flowing. Positive constructive daydreaming (PCD), exercise, taking a shower, napping, and more are helpful for getting access to more inspiration than just your conscious thinking can give you.

I'd like the authors to try Vipassana's 10 day retreats -- it's free and for ten days you meditate, don't speak, and don't use any electronic devices. Not sure if there's any more focus than that! You'll survive...and afterwards probably thrive :-D.

I believe state of the art methods aim for linear time complexity, using the Fast Multipole Method to perform matrix-vector products in O(N) operations rather than O(N^2), and applying that in an iterative solver such as GMRES, which typically requires a constant number of iterations, because the matrix is just a slight perturbation of the identity matrix.
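As a toy sketch of why that works (pure Python with a made-up averaging kernel; not the Fast Multipole Method or GMRES themselves), a simple Richardson iteration on a system that is a small perturbation of the identity converges in a handful of iterations regardless of problem size:

```python
# Toy model: solve (I + eps*K) x = b with the fixed-point iteration
# x <- b - eps*K(x). When eps is small, the matrix is a slight
# perturbation of the identity and the iteration count is essentially
# constant in N -- the property the comment attributes to GMRES here.
# matvec_K is a stand-in for an O(N) fast multipole matrix-vector product.

def matvec_K(x):
    s = sum(x) / len(x)          # one O(N) pass; a simple averaging kernel
    return [s] * len(x)

def solve_near_identity(b, eps=0.1, tol=1e-10, max_iter=50):
    x = list(b)                  # initial guess x0 = b
    for it in range(max_iter):
        kx = matvec_K(x)
        x_new = [b[i] - eps * kx[i] for i in range(len(b))]
        err = max(abs(x_new[i] - x[i]) for i in range(len(b)))
        x = x_new
        if err < tol:
            return x, it + 1     # converged after a tiny, N-independent count
    return x, max_iter

x, iters = solve_near_identity([1.0] * 1000)
```

Doubling N leaves the iteration count unchanged; only the per-iteration cost grows, which is what makes the overall scheme O(N) when the matvec itself is O(N).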

Happy outcome, but it could so easily have gone the other way. Surely it would have been more responsible to locally fake the registration of the domain first (apparently as easy as modifying /etc/hosts in this case), given he had no idea how the payload would respond? o_O

Not sure I'd be singing his praises if his rash decision had triggered the deletion of the encrypted files.

> the employee came back with the news that the registration of the domain had triggered the ransomware meaning we'd encrypted everyone's files...

Even though this fortunately turned out to be false, what if it had been true? Would the security researcher be held in any way accountable for activating the ransomware? If I were the author, I might be a bit more careful in the future before changing factors in the global environment[1] that have the potential to adversely affect the malware's behavior, but of course I'm not a security researcher, so I really don't know.

[1] I suppose a domain could probably be made to appear unregistered after being registered - depending on the actual check performed - but there are other binary signals (e.g., the existence of a certain address or value in the bitcoin blockchain) that might not be so easy to reverse.

> After about 5 minutes the employee came back with the news that the registration of the domain had triggered the ransomware meaning we'd encrypted everyone's files (don't worry, this was later proven to not be the case), but it still caused quite a bit of panic. I contacted Kafeine about this and he linked me to the following freshly posted tweet made by ProofPoint researcher Darien Huss, who stated the opposite (that our registration of the domain had actually stopped the ransomware and prevented the spread).

That's quite a high-abstraction-level programming thing to do, using a domain name's registration state as a boolean. Is that a regular thing?

What really amazes me about this attack is that the main attack vector seems to be exploiting a SMB vulnerability. Reasonable enough of a way to spread within an organization, but it's amazing that so many organizations seem to have this port and service open to the world for this worm to exploit.

I'm not the most diligent follower of security news, but I'm pretty sure that SMB network sharing is riddled with security vulnerabilities, latency issues, etc, and is generally wildly unsuitable for being left wide open to the entire internet. How could any institution with a competent IT department not have had this service firewalled off from the net for years?

Honestly, how stupid were the malware authors to use standard DNS for a domain that could take down their shit when they use Tor for the actual key and address communication and everything... it's like they half understood what they were doing.

Well, I guess maybe they didn't want things to get too out of hand and now if they want they can be back up soon with that fixed.

> "One thing that is very important to note is our sinkholing only stops this sample and there is nothing stopping them removing the domain check and trying again, so it's incredibly important that any unpatched systems are patched as quickly as possible."

Pretty interesting. If I'm reading it correctly, the existence of the domain is checked, and if it is there, the program aborts, in order to evade sandbox analysis.
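A minimal sketch of that logic as described in the write-up (not the malware's actual code): treat "does this domain resolve?" as a boolean. The resolver is injected here so the behavior can be demonstrated without real DNS; in the actual sample this was a query to a hardcoded domain.

```python
# Sketch of the kill-switch check described above. socket.gaierror is a
# subclass of OSError, so a failed lookup (NXDOMAIN) lands in the except.
import socket

def domain_is_registered(domain, resolve=socket.gethostbyname):
    try:
        resolve(domain)
        return True      # lookup succeeded: domain (or a sandbox) answered
    except OSError:
        return False     # no answer: domain appears unregistered

def should_run_payload(domain, resolve=socket.gethostbyname):
    # The sample aborts when the domain exists (sandbox-evasion logic),
    # so registering the domain acts as a kill switch.
    return not domain_is_registered(domain, resolve)
```

This also shows why the sandbox behavior quoted elsewhere in the thread matters: a sandbox that answers every lookup makes `domain_is_registered` always return True, which the author exploited in reverse.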

I was wondering why they didn't just do a simple variant:

1) Instead of relying on a DNS domain, which anyone can register, why not make a user account on some well-known forum site, like HN or Reddit.

2) Open the site, look for the user's page, and check their message titles by hashing them and comparing against a hash embedded in your code.

3) Detonate if you don't see the code, or the user account doesn't exist.

This would have the useful characteristic that you could start/stop the attack using just an internet browser, anywhere. And the code word that you are after would be crypto hashed, so the defenders would have to find your keyword somehow from the hash. Heck, you could confound everyone by turning the thing on or off according to location, time of day, and so on.
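The commenter's hypothetical scheme can be sketched in a few lines (everything here is illustrative: the keyword, the fetching of titles, and the function names are all made up):

```python
# Sketch of the proposed variant: only a SHA-256 digest ships in the
# binary, so defenders would have to invert the hash to learn the
# code word. The titles list stands in for a scraped profile page.
import hashlib

# Digest of a made-up code word; only this hex string would be embedded.
EMBEDDED_DIGEST = hashlib.sha256(b"operation-green-light").hexdigest()

def kill_switch_active(titles):
    # The kill switch is on iff some message title hashes to the digest.
    return any(hashlib.sha256(t.encode()).hexdigest() == EMBEDDED_DIGEST
               for t in titles)

def should_detonate(titles_or_none):
    # Detonate if the account doesn't exist (None) or the word is absent,
    # per steps 2) and 3) above.
    if titles_or_none is None:
        return True
    return not kill_switch_active(titles_or_none)
```

Posting the code word from any browser flips the switch off; deleting the post (or the account) flips it back on, which is the start/stop property the comment describes.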

For extra points make it a blockchain thing. They're already using that for payment, right?

Great write-up. It's funny: a mistake/exploit enabled the malware, and a mistake/bug allowed it to be mitigated, via the researcher's mistaken assumption that registering the domain would simply provide him with sample data.

To mitigate, I am running Debian as the host and jailing Windows 10 in a virtual machine. I have uninstalled SMB 1.0 on the machine via Programs and Features > Turn Windows features on or off, and I have also blocked port 445 (SMB) with ufw on Debian:

sudo ufw deny out to any port 445

As well as this, I am not deferring updates in any way and am dutifully patching. I've always hardened Windows in this way and I've never had issues with malware; if I did, the impact would be minimal, because I've compartmentalized my files in such a way that even the worst malware would only encrypt some of my files and not all of them.

I store all my critical files in an offline environment (sandbox), so the only files that could be encrypted are replaceable (non-important) and disposable. For example, I wouldn't cry if my CV got encrypted, because a copy of it exists in about 50 locations, both offline and online.

Unfortunately I need Windows because my colleagues like to send Windows-only .DOCX files which work best in MS Word, and I don't have a Google account, so I can't open them in Docs. This is a conscious decision to permaban Google from my life, but Windows is staying.

Not surprising to see 14-year-old unpatched software connected to the internet being hacked like that. At the least, those in charge of budgeting these upgrades should pay a price for failing to make them; the users are obviously innocent victims.

This story, if true, details a person who profiled this malware and correctly logged the network requests it was making and then correctly identified a fundamental vulnerability in the software. This is not an accident at all - it is rather a profile in supreme competence. We should recognize it as such.

>In certain sandbox environments traffic is intercepted by replying to all URL lookups with an IP address belonging to the sandbox rather than the real IP address the URL points to, a side effect of this is if an unregistered domain is queried it will respond as if it were registered (which should never happen).

"Dialog boxes asking for passwords are a very popular social engineering tactic designed to trick users into giving attackers their passwords"

Apple is extremely guilty of normalizing the frequent entry of passwords. I recently reinstalled a Mac and an iPad, and for each device I must've entered my Apple ID password seven or eight times. In the normal course of getting things done, I then enter either this or my local login password many times a week.

When your password is twenty characters of line noise or an extended passphrase, this is thoroughly irksome, especially on virtual keyboards like the iPad's. It is no surprise to me that less security-conscious folks, faced with this onslaught of excessive credential demands, choose shorter (i.e., easily cracked) passwords, and no surprise that everyone becomes less suspicious of the sham password dialog.

So when reading of yet another photographic burglary from a cracked iCloud account, we should always lay part of the blame at Apple's feet, for systematically normalizing the frequent entry of credentials.

That is not the end of Apple's social-engineering-enablement shame. Another glaring blunder is in Apple Mail, where the "To:" field is shown with your real name even when the sender did not include it. Humans respond positively to the use of their given name, so this heightens the verisimilitude of scam messages.

Mobile OS security models are bound to land on the desktop soon-ish. What does any random App have to do with anything in ~/Library that is not its own Application Support or .plist preferences?

To be honest I don't mind if all Apps are sandboxed with the exception of a couple "user super-user"; I don't really care if my machine's root account is secure if all my horses sitting in $HOME are let loose on the net.

The standard macOS password prompt surely needs to change. It's become too familiar and I'm sure I've filled it in hastily before without wondering why or what for. It needs to be implemented in a way that is impossible for nefarious apps to replicate.

> Note: The domains in red were not registered at the time of my research, although they were registered last night by an unknown entity. They seem to be backup domains in case one of the first two stops working.

Or they could be domains for checking if you're in a sandbox like WanaCrypt. Why wouldn't you just use 20 well known domains otherwise?

>The malware obtains the time and date by creating a new environment variable called $hcresult that contains what's being returned by sending an HTTP request to the Google-hosted link by executing this command:

This Handbrake outbreak could have been easily avoided. For instance, Handbrake could create a separate server on say, Amazon EC2 and have it download the file from their website every 30min or so, and check the checksum. If it's not right, then it flips a kill switch on the website.
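A minimal sketch of that monitoring idea, under stated assumptions: the known-good digest, the payload bytes, and the kill-switch callback are all placeholders; a real deployment would fetch the binary from the project's download URL on a timer.

```python
# Sketch of the suggested integrity monitor: hash the published binary
# and trip a kill switch if the digest drifts from the known-good value.
import hashlib

# Placeholder: in practice this would be the digest of the real release.
KNOWN_GOOD_SHA256 = hashlib.sha256(b"official release bytes").hexdigest()

def binary_is_intact(payload: bytes) -> bool:
    return hashlib.sha256(payload).hexdigest() == KNOWN_GOOD_SHA256

def check_and_maybe_kill(payload: bytes, trip_kill_switch):
    # Called every ~30 minutes with freshly downloaded bytes.
    if not binary_is_intact(payload):
        trip_kill_switch()   # e.g., disable the download link on the site
        return False
    return True
```

The point is that the check runs on infrastructure the attacker hasn't compromised, so tampering with the mirror alone can't suppress the alarm.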

Why were they going after 1Password vaults? I assume 1Password is like KeePass, where all your passwords are in an encrypted file? How could they decrypt all those files? Or do they assume people use weak passwords?

Does the Mac have any ability to warn when someone attempts to install malicious software, other than the usual warnings about unsigned software? Windows 10, for example, will scan every attachment before opening it, catching a lot of stuff before it can do any harm.

A Bluetooth equivalent would be cool; then you could pair Bluetooth and Wi-Fi MACs based on temporal correlation. If one changes, you can infer it from the other. Over time you could split it into how-many-locals-around versus how-many-visitors-around.

Wonderful article! "Smalltalk, like Lisp, runs in the same context it's written in."

I have been programming professionally in Common Lisp (off and on) since the 1980s but there is something equally magical about Smalltalk. I have often thought that Smalltalk could be the language I use after I retire (I am in my 60s and I will probably stop working in about ten years).

> Smalltalk is powerful because all Smalltalk data are programs: all information is embodied by running, living objects.

That's what Lisp systems do too. Program elements like classes, functions, methods, symbols, ... are first class objects. With something like CLOS you have a similar level of object-oriented meta-programming capabilities.

Many Lisp systems additionally offer the ability to execute Lisp data using a Lisp interpreter, and Lisp has a simple data representation for Lisp programs: Lisp data.

Smalltalk, OTOH, uses text as source code, usually compiled to byte-code.

> because Lisp source code is expressed in the same form as running Lisp code

Only if you use a Lisp interpreter. Otherwise the running Lisp code might be machine code or some byte code.

> Smalltalk goes one further than Lisp: it's not that Smalltalk's source code has no syntax so much as Smalltalk has no source code.

That's a misconception. Smalltalk has source code. As text. It's just typically managed by the integrated development environment.

It's actually Lisp which goes further than Smalltalk, because Lisp has source as data and can use that in Lisp interpreters directly for execution.

Indeed, homoiconicity is a very powerful thing. It doesn't have to be core to the nature of the language, though; as far as I know, any Turing-equivalent language readily admits a metacircular interpreter, and so really a homoiconic language is a language with a compiler in the standard library.

As a thought experiment, imagine Lisp without macros. It's not hard; after all, "The Little Schemer" covers metacircular interpretation without ever mentioning macros. So what's going on? Apparently we don't need macros! But, we could add macros to a Lisp by reifying them in the metacircular interpreter. There's actually a feature in plain sight which makes this possible, and it's the humble (quote) special form. This is what makes code and data intermix so cleanly in Lisp.
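The role of (quote) can be made concrete with a toy metacircular-style evaluator, sketched here in Python over nested lists standing in for S-expressions (the structure and names are mine, not any real Lisp's):

```python
# Tiny S-expression evaluator. The key line is the quote case: it hands
# back its argument as plain data instead of evaluating it, which is
# the hook that lets code and data intermix.
def evaluate(expr, env):
    if isinstance(expr, str):          # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):     # numbers etc. are self-evaluating
        return expr
    head = expr[0]
    if head == "quote":                # (quote x) -> x, unevaluated
        return expr[1]
    if head == "if":                   # (if test then else), lazy branches
        branch = expr[2] if evaluate(expr[1], env) else expr[3]
        return evaluate(branch, env)
    fn = evaluate(head, env)           # application: evaluate operator...
    args = [evaluate(a, env) for a in expr[1:]]  # ...and operands
    return fn(*args)

env = {"+": lambda a, b: a + b}
```

Macros would then be one more case in this dispatch: a function that receives its arguments unevaluated (as quote does) and returns a new expression to evaluate, which is the reification the paragraph above describes.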

This is why languages like Julia and Monte are not shy about using "homoiconic" to describe their language design; a standard library compiler is just as good as a compiler in the core semantics, as long as it's easy to use and meshes well with the rest of the language.

> What most of these languages seem to miss is that Smalltalk's class system, like Lisp's macro system, is a symptom of the power already available in the language, not its cause. If it didn't already have it, it wouldn't really be that hard to add it in yourself.

What most of these articles seem to miss is that Java's designers were themselves expert Lispers and Smalltalkers, that they most certainly realized all that, and that Java's success is a consequence of them understanding exactly why not to repeat the same design. Design doesn't live in a vacuum. Design is shaping a product to fit not just some platonic ideal but reality, with all its annoying constraints.

To understand why Lispers and Smalltalkers designed Java the way they did, I recommend watching James Gosling's talk, How The JVM Spec Came To Be[1], and the first 20 minutes or so of Brian Goetz's talk, Java: Past, Present, and Future[2].

Lisp and Smalltalk actually suffer from the same problem: late-binding sucks. When I was in college a professor once pointed out to me that he didn't know of an LL(1) parser for Smalltalk. There's a reason for that: Smalltalk's syntax is late-bound! It's almost like Forth's syntax: the reader consumes words and decides what to do with them on the spot, whether they represent variables, operators, constants, or parts of a message send and once it has a subject, verb, and objects, dispatches the message also on the spot.

This plays havoc with your ability to do static analysis, and languages that hinder static analysis should not be used in real-world systems. If the earliest you find out about errors is in a running system, it's far too late and you are hosed.

This is why the Lisp and Smalltalk Evangelism Strikeforces have been met with decades of failure, while the Rust Evangelism Strikeforce is getting on with a massive project of digital tikkun olam.

Worth remembering, especially for those just entering the software field: by the time a potential employer gives you an employment agreement to sign, they've already decided they want you. At that point, it's on them to give you a palatable offer. They may include a noncompete clause for one of two reasons: 1) to prevent you from working somewhere else at the same time, which can create all sorts of conflicts of interest, or 2) because it'll keep you from looking for a new job, and they think you're too naive to argue.

Here's my suggestion. When you receive the document, read it and see if there's a noncompete clause. If so, you're going to want to send a redlined version back to them, changing the noncompete duration from "during and for 2 years following employment at the company" (or whatever they gave you) to "for the duration of employment at the company." By doing so, you show your willingness not to do any kind of work for a competitor while employed, while very clearly pointing out that you do have the right to get a new job. It may be important not to offend the person who wrote up the agreement and included something so ridiculous, so the minor nature of your modification will allow them to save face.

In the end, most employers won't bother to argue the second point, and the ones that do are probably shadily taking advantage of you in other ways.

Additional note: in California and several other states, these clauses are not legally enforceable anyway, and you should mention that when you give them the "fixed" agreement.

The only reason big companies offer health insurance is that it limits employees' freedom. It would be easy for the Fortune 100 or 200 to agree in unison to eliminate health care and provide higher salaries. It would make the companies more competitive globally and free them from a whole lot of other nonsense, but they don't drop healthcare. The reason they don't is that healthcare and pre-existing conditions limit employee options and suppress wages. Also, if there were universal healthcare, it would be easier to start small companies and attract employees; those small businesses would be competing for employees against big companies on equal footing.

Here's a relevant quote (in which the author is actually quoting Aaron McNay):

"Both employers and employees would like to be able to train the employees if the cost of doing so is less than the gains in productivity. However, there is a potential collective action problem here. What happens if the employer provides the training, but the employee then moves onto another job? The employer bears the burden of the training costs, but does not receive any of the benefits. As a result, the employer does not provide the training, and a mutually beneficial trade is not made.

By preventing the employee from being able to move, a non-compete agreement eliminates the collective action problem."

I'm not saying that non-competes are necessarily good, or necessarily bad. It depends on the circumstances. But I do think that a lot of other commenters in this thread do think that non-competes are necessarily bad, and I think that's incorrect.

I had a previous employer trying to stop me from working directly for a client. Only, I had brought in the client, I was the only one working for that client and that client didn't want anything to do with the rest of my employer.

I felt morally OK with the situation...

Only, my contract did have a noncompete. But then, this is Sweden, and noncompete clauses are almost never enforceable under Swedish law. An employer can't stop an employee from taking another position. For the clause to be valid, the employer must pay the same salary the new position would have had while the employee rides out the non-work period, and no one does that.

A strongly worded letter from my lawyer sorted it. Never heard from them again.

My last company's noncompete had a really nice twist: instead of banning me from seeking employment at a competitor altogether, it granted my employer the right, at their discretion, to compel me to delay starting at a competitor for a certain amount of time. However, in order to do so, they would also have to pay my salary over that period.

In finance, companies will pay you your salary to not work if they decide to enforce a non-compete. It's written into the contract. I have friends who get to take year-long paid vacations when they switch jobs just because they work in HFT.

I'm surprised that this isn't law. I guess financial companies care about their employees more and/or their employees are more astute about contracts.

Companies shouldn't be allowed to prevent their ex-employees from earning a living. If it's that important for them to prevent the transfer of their proprietary information, they should be happy to pay for it.

If you're going to violate a noncompete, don't tell anyone you're going to work for a competitor. Keep yourself as small a target as possible for your former employer's legal team.

- When you quit, tell your now-former employer that you're quitting to pursue something other than your established industry: your (made-up) lifelong dream of starting your own microbrew brand, macramé supply business, winery, whatever. Or looking after a sick relative, or going back to school full time, etc.

- Cut off ties with all your former coworkers, at least for the noncompete duration. If you bump into them at the grocery store and you can't get away from them, tell them about how wonderful the beer business is or how your relative is doing.

- Don't put on Facebook or Linkedin that you work for the new employer.

- For the duration of the noncompete, only those closest to you who critically need to know (spouse, etc.) will know about your new employer.

- Avoid publicly facing, industry-related activities that tie you to your new employer for the duration of the noncompete: giving speeches, presentations, writing articles, etc.

None of these are foolproof, but they are all common sense. Remember the Monty Python sketch "How Not to Be Seen."

"California law prohibits noncompete clauses, contributing to the inveterate poaching with which the state's technology industry was founded. It can be brutal for employers, but it helps raise wages and has created a situation where any company looking to hire a bunch of engineers in a hurry, be it an established giant or a start-up, feels it should locate there."

I've been sued twice over non compete language. The good news is they are reasonably hard to enforce because most judges will ultimately agree that people have a right to change employers. The bad news is it can cost a lot of money to get to the point where the judge says that.

It's important for software developers and in-demand job applicants to push back on these trends. I refused the noncompete clause at my startup (and still got the job) and made a point of saying that I'm principled against them for hurting people like the man in this article. We may be disconnected from the rest of America, but maybe my little requirement can put the thought in people's heads that it's wrong.

Just for comparison: in the Czech Republic these clauses are legal, but their duration is limited by law and the ex-employer is required to compensate you for the time you are limited in the job market.

In Norway, a law effective from 2017 requires the employer to pay you the same salary for the period the non-compete is in effect, for a maximum of one year. It has to be in your contract up front, and they have to explicitly list customers and competitors.

If you're working in a small industry where specialized skills are required, and firms commonly collaborate, you may encounter unacknowledged/secret non-compete policies. Basically, nobody else will hire you, and they won't tell you why. If you've made some friends, they may tell you what's going on. But there's little recourse.

In the early 1990s, I'd co-founded an object database company, with a standard "east-coast-style" non-compete, which among other things granted us injunctive relief. Our top developer left to work for our main competitor. We sued, and the courts ruled, basically, that there is no slavery in the US and our developer had every right to earn a living doing what he knew how to do. Maybe laws have changed, and maybe it varies by industry, but my experience is that noncompetes are meaningless. BTW, I don't particularly wish they had teeth, and my company was probably not significantly harmed by the outcome. Just saying I wouldn't sweat too much about signing a noncompete.

Since this should be illegal, or at least illegal absent some reasonable compensation for giving up the right to freely seek alternative employment (e.g. a big retention bonus), presumably our politicians offering "regulatory relief" are to blame?

The really annoying thing about noncompetes is that they're usually at the discretion of the employer. You might be in a situation where you have a 12 month noncompete and nobody wants to hire you 12 months in advance, but then your former employer terminates your noncompete within a month and stops paying you.

I've almost always been presented with one, and I've always had it removed. It is a certainty that I will compete, especially as I become more of an "expert" in an industry; it's not a fair expectation. I work for startups; it's probably tougher at big corps.

My first (and last) non-compete was when I was starting out as a web developer in a small company. By the time I fully realized what I had signed, I had contractually given up my right to work for any other webdev company for 1.5 years. Even worse, the company owner stated that he believed the non-compete also extended to all our clients (and the clients of a major client, since we ran a job board). This meant nearly all banks, Heineken, Google, and consultancy agencies.

FWIW, my understanding is that in right-to-work states a noncompete CANNOT prevent you from earning a living in your field. The clauses have to be very specific, time-limited, and reasonable; otherwise they don't hold up under legal scrutiny.

Stuff like, not being able to take current customers to a competing business within a mile for a period of 1 year is considered reasonable.

This prompted me to look at my employee agreement. Sure enough, there it is. I signed it because I needed the job and wasn't asking too many questions.

But this is interesting, I work in an area of the company that isn't really part of their core competency. Meaning that the kinds of firms that would hire me are literally in another sector and wouldn't be considered competitors.

So this fact, that normally manifests as complaints that "management has no idea what we do here" and/or that they "have no business claiming they're in this business," ends up helping me out.

this shit should be illegal. even small businesses are doing this now. programmers are a dime a dozen and everyone is using open source. fuck all these tech companies they don't have jack shit TO steal and force you to sign away everything anyway

The nations mentioned are some of the most advanced in the world, and their lower-than-world-average GDP growth is being used to show that "it's not all about wealth". I don't think this article makes a very good point.

I really don't think that this article makes a very good case for nations to focus on happiness over economic growth. It is apparent that the nations that the article states are the happiest are highly developed nations which have had decades of growth which has given them the means to keep their population happy. With regard to the point about China's growth, I believe that given how rapidly China is growing, there will be a significant lag between the rate at which GDP grows and the time when the Chinese people begin receiving the benefits of this growth.

Of course, general well-being is a better metric of whether the incumbent gets voted in than economic growth in developed countries because well-being is far more tangible to the common man than the abstract concept of economic growth. OTOH, in a developing country, I'd argue growth is a better indicator of the probability that the incumbent will win since developing countries have growth rates which are in general far higher than developed, and since these high growth rates result in visible, tangible changes: bridges get built, schools are opened, and people get jobs.

Perhaps, the new thesis of the article should be that developing nations should focus on economic growth, while developed ones should focus on the happiness of their people.

I read a similar thing a few weeks ago. It said that the U.S. (government, through laws and the legal system) centered on "fairness" up until recently. Sometime after WWII the focus changed to economics and growth. I'm guessing it was saying this is the cause of crazy inequality and "the jobless recovery."

I'm not sure I buy all of it (the U.S. wasn't really a world power prior to the world wars, so they'd be dismissing that and other gains if it was their whole premise), but like this article, something to think about.

Although I'd have to poke around for the specific sources, I've read papers that showed that, in fact, subjective well-being increases continuously with wealth (per capita GDP). However, the increase is not linear, but rather logistic--which makes intuitive sense, since a $5,000 pay raise for an employee making $25,000 a year isn't the same as for one making $100,000. On the other hand, I've also read that the increase in wealth past the much-referenced $75,000 level doesn't significantly increase emotional well-being (unconscious positive/negative feelings).
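The commenter's own example can be made concrete with a logarithmic utility curve, one standard diminishing-returns model (the comment suggests a logistic shape, which behaves similarly in this income range); the dollar figures are just the ones from the comment:

```python
# A $5,000 raise under a logarithmic well-being model: the gain in
# "utility" is log(income + raise) - log(income), which shrinks as
# income grows. Values are illustrative only.
import math

def utility_gain(income, raise_amt=5000):
    return math.log(income + raise_amt) - math.log(income)

gain_low = utility_gain(25_000)    # the raise on a $25k salary
gain_high = utility_gain(100_000)  # the same raise on a $100k salary
```

Under this model the $25k earner gets more than three times the subjective benefit from the same raise, which is the asymmetry the comment describes.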

While this is pretty neat, FWIW I've always been blown away by the summarization tool built into macOS. You just select text, hit summarize, and adjust the length. It works wonderfully - I used it in college all the time for annotated bibliographies. To be honest, I've always found it good enough, and it's a wildly simple tool (or so it looks) compared to using AI.

An attempt at a more accurate statement would be that this emulates the use-after-move checking that the Rust compiler does. The problem is that it doesn't do that either: it statically prevents copies but doesn't prevent use-after-move.

I think the accurate way to describe this pattern would be that it disallows copies and forces you to annotate moves with the move keyword. This is somewhat similar to what Rust does, in that non-Copy types are moved by default. The difference is that you don't have to write "std::move" in Rust: the compiler just infers the right thing to do.

It's a little hard to map this onto Rust semantics to begin with, since fundamentally all this is is not having a copy constructor, which is a concept that doesn't exist in Rust in the first place.
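For concreteness, here is a minimal sketch of the pattern being discussed (the type name and functions are made up for illustration): deleting the copy operations forces every ownership transfer to be written out with std::move, but nothing stops you from touching the moved-from object afterwards.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Illustrative only: a type with copies deleted and moves defaulted.
struct Token {
    std::string value;
    explicit Token(std::string v) : value(std::move(v)) {}

    Token(const Token&) = delete;             // copies rejected at compile time
    Token& operator=(const Token&) = delete;
    Token(Token&&) = default;                 // but moves still exist...
    Token& operator=(Token&&) = default;
};

std::string demo() {
    Token a{"secret"};
    // Token b = a;          // error: use of deleted copy constructor
    Token b = std::move(a);  // the transfer must be spelled out
    // 'a' is now in a valid but unspecified state, yet this still
    // compiles and runs: use-after-move is NOT statically prevented.
    std::string after_move = a.value;
    (void)after_move;
    return b.value;
}
```

Rust differs in both directions: the move in `let b = a;` needs no annotation, and a later use of `a` is a compile error.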

This is nice, but it just allows you to replace some of the Management Engine code. What we need to know in detail is what it's doing. There's probably a backdoor in there that hasn't been discovered yet.

A true disaster always has more than one cause. It took many separate problems and mistakes to sink the Titanic and cause such a large loss of human life. Same here. There can be endless and interesting discussions about the role of the NSA, Microsoft, and the end users in this very specific incident.

But the root of the problem is that computer security still does not get proper awareness and attention. This starts with how we write software but, from a societal point of view, is mostly about how we deal with computer systems. Computer systems are not toasters you can replace easily; often they are part of larger installations and difficult to replace as a component. We need to treat them the way we treat traffic safety, workplace safety, or hygiene. There should be a clear expectation (I sincerely hope we won't require overly strict state regulation) that, like any professional tool, a computer system has to be reviewed at regular intervals for being fit for its intended purpose, and that security maintenance is done as naturally as mechanical or electrical checks.

So, for any computer-powered (and networked) device, this would mean that either a maintenance contract is in place (which in the end would mean the provider has a contract with Microsoft, if Windows is used), or, as with any other device, the machine is no longer considered fit for professional use.

Because most people don't give a crap? Out of the 10 people who saw the news (on TV!) while I was there 9 reacted with "Ha! These hackers..." and 1 with "I'm pretty sure they are not interested in a guy like me, I'm safe haha".

Until people start losing personal money they won't bother educating themselves. They see these "hacking games" as, well, games.

> The money they made from these customers hasn't expired; neither has their responsibility to fix defects.

This is wrong. We don't ask for mandatory lifetime guarantees in any other industry I'm aware of, and perhaps more importantly, much of what is done in the field wouldn't be possible if we did (could you imagine having to continue to maintain an IE5 webpage for another twenty years?).

It goes on:

> In its defense, Microsoft probably could point out that its operating systems have come a long way in security since Windows XP, and it has spent a lot of money updating old software, even above industry norms. However, industry norms are lousy to horrible, and it is reasonable to expect a company with a dominant market position, that made so much money selling software that runs critical infrastructure, to do more.

If I buy a toaster it comes with a one-year warranty, maybe. A nice car might come with a five-year or two-hundred-thousand-mile limited warranty. Microsoft sold a product at a fraction of that cost and supported it, unconditionally, for 8 years. 8. And they supported it for five more after that through arrangements with enterprises (a select few of which somehow concluded that paying some engineering salaries at Microsoft for dedicated support was cheaper than upgrading). That's a 13+ year lifetime of support on what was an $80-a-license product. Industry norms can only be "horrible" insofar as there's only been a serious industry for 30 years... and XP was supported for half of it (man, I suddenly feel old). My point is that there is no world in which the "cash-strapped National Health Service" is not the primary entity that was grossly negligent in its maintenance of critical infrastructure.

Stepping back and looking at the article as a whole and less at specific inflammatory parts, it is, well, filled with inflammatory parts. It starts as a thin attack piece on Microsoft for being slow to provide free support for a 16 year old product, offhandedly references IoT for some added scare factor, then starts calling for action (from both corporate and government actors) without any serious discussion on either the merits of the proposed actions or the impacts taking them would have on those organizations or the implications that they would create for future actors.

But hey, if you're a fan of Bruce Schneier's more recent musings, at least you'll enjoy the conclusion: That we must legislate software, and fast.

The world gets hacked because programmers make mistakes, and their management cannot evaluate those mistakes - if for no other reason than that sometimes it isn't even obvious a mistake was made until a couple of years later.

Users have been fooled: "turn it off and on" is a reasonable and well-known troubleshooting step, but nobody blames the software vendor. If I'm on the phone with a company and they tell me to turn it off and on, I can't even point out "so you sent me something defective?" This is normal, folks.

Maybe we need to teach programming younger and younger -- and it'll take two or three generations to become common enough that management will actually understand what I'm doing. Or maybe we need awareness campaigns to keep users from putting up with shit experiences!

Or maybe someone has some other idea, but the major barrier exists: We don't know how to program computers, and saying that out loud makes a lot of people with the job-title (or description) of programmer clam right up.

Because the users don't know what they are actually buying. They go for superficial signs of quality - like weight, design, surface polishing, and a nice UI.

The security of an object is something you can only evaluate the day it turns around and snaps at you.

Now the default American solution for this would be a "Late-Adopter" plugin, allowing the rich to install "additional" gated-community security - and letting the mob become one huge botnet, held back by aggressive campaigns of remotely bricking whole device classes should they become a threat to the "devices" in the better neighbourhoods.

Unfortunately the rest of the world is either too poor or unwilling to follow this model, which means we are going to see a regulated, TÜV-style security-checked model in Europe and Japan, state-regulated devices in China & Russia - and a wild west everywhere else.

Hopefully governments will one day adopt GNU/Linux-based OSes and software. I know, I know... Linux on the desktop is hard, but it seems like writing exploits for it is harder than doing the same for Windows (maybe because hacker focus is on Windows, who knows).

Anyway, the money equation, I think, is quite simple:

Why buy Windows, when you can use Linux and spend the money on backup infrastructure?

It all started with poor ethics. Every single version of Microsoft Windows has intentionally left backdoors for the NSA, and some hackers knew how to use them. It's as if you pay money and buy a house, but the previous owner keeps backup keys to watch you - and then others get hold of the backup keys and kick you out of your own home unless you pay them.

This is such a shame for Microsoft, the NSA, and the American government. People trusted Microsoft products and purchased them; in return, Microsoft wanted more than money: they wanted to spy on them for their ideological goals.

To howls of outrage, I have suggested to several companies that we simply disconnect from the public Internet. People programmed before cut-and-paste-from-SO was a thing after all. Obviously the web servers in the DC need to be accessible but the desktops in the office, or the critical bits of infra like DB, file servers and so on, nope.

Anyone who wants to surf can easily do so on their personal smartphone with no risk to corporate systems. No one has ever been able to put together a coherent rebuttal to my proposal, yet still the PCs remain connected and still people click things they shouldn't...

I would say the reason the world is getting hacked is quite simple: OS vendors are asleep at the wheel. Instead of actually improving their OS platforms, they're instead turning them into web browsers and game engines - while all the vital services that a modern OS should provide are being ignored in the rush for control.

Take, for example, the Fappening. This was possible because of iCloud. iCloud is only necessary - like Dropbox and other services like it - because OS vendors decided they didn't want people to have control over their content using their local computers; that it was 'easier' to provide servers dedicated to the purpose than to actually add dedicated file sharing to individuals' computers.

(There are no really good reasons why your modern PC can't serve its own content - especially in this era of bandwidth and monster CPU power. We hosted the 90's Internet on far less powerful computers than your average mobile phone, with less bandwidth too.. the point is, the protocols.)

So I honestly think that OS vendors need to be forced back behind the wheel to make our computers better, and the "network is the computer" business model needs to die. This was always a terrible idea, formed on the basis of an accountant's wet dream, and should be forgotten as soon as possible. Instead, let's build better computers, simple as that. Computers that are actually safe to use because they've been designed that way, from the get-go. The cloud must die.

Have had a lot of fun with TIS-100. It's a very weird architecture, but it does teach the fundamentals of low-level coding, like using a single working register, manipulating data in a simple way and using goto statements and branches for loops.

A lot of the problems are hard because of the architecture, though - they wouldn't necessarily be hard to implement on more conventional architectures.

The article quickly goes over it, but for those who still wanna know more about the architecture, the TIS-100 is composed of nodes that can store a small number of lines of instruction code, have a working register, and have 4 I/O ports, UP, DOWN, LEFT and RIGHT. If asking for input, they will block until input is received from the specified adjacent node, and if passing output, they will block until the specified adjacent node asks for input. There are also memory nodes, introduced later in the game, to store more data.

These nodes are on a grid. Some of them are disabled, and the memory nodes' placement differs from program/puzzle to program/puzzle. Thus, careful selection of nodes and I/O ports is required for completion. I don't know if anything similar exists in actual hardware.
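The blocking handoff described above can be sketched in a few lines. This is a toy, single-threaded simulation (not the game's actual engine, and the node behaviors are invented for illustration): node A writes each input value to its RIGHT port, node B reads it from its LEFT port, doubles it, and emits it. A node that cannot proceed simply does nothing that cycle, which is how a "blocked" node behaves.

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <vector>

// Toy model: one wire between two nodes; a full slot means the writer blocks.
struct Sim {
    std::vector<int> input, output;
    std::size_t in_pos = 0;
    std::optional<int> wire;   // the A->B port (A's RIGHT, B's LEFT)

    // One lockstep cycle: each node either makes progress or stays blocked.
    void step() {
        // Node B: consume a waiting value from its LEFT port and double it.
        if (wire) { output.push_back(*wire * 2); wire.reset(); }
        // Node A: write the next input RIGHT; it blocks while the slot is full.
        if (!wire && in_pos < input.size()) wire = input[in_pos++];
    }

    void run() { while (output.size() < input.size()) step(); }
};
```

Because each handoff occupies the wire for a cycle, values flow through one at a time - the same rendezvous behavior that makes careful node and port selection matter in the game.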

There's also a built-in "debugger" which simply allows you to run the program step by step and view all values, blocked nodes, and current instructions, which really helps, and possibly teaches players how to generally debug actual machine code. The programs run on a set of unit tests, and you can see which ones fail and why.

In classic Zachtronics fashion, there are graphs at the end ranking your performance in terms of time and space. Users not familiar with actual hardware-architecture principles probably won't be able to figure out for themselves how to get the best time, because most problems require pipeline-like instruction arrangements, due to the blocking nature of the nodes. So while it teaches tricks and fundamentals, I don't think it teaches more advanced and important stuff. And that's not a bad thing - it's a great game.

For you parents out there...what has been your experience with/advice for teaching your kids a programming language? It's definitely something I want my kids to get comfortable/familiar with early but I get concerned about over-exposing them to too much "screen time" at a young age and the deleterious effects that might have (even ones we don't know about yet).

Don't have kids right now, or any on the way, but that's something on the horizon for me so I've been thinking about it.

I always wonder why there is no 0x10c'ish game yet (it was a game prototype set on a space ship where you could program the computer), as there was a lot of hype around it and the idea seemed really nice (https://en.wikipedia.org/wiki/0x10c).

There are some games that explore that direction (e.g. space engineers has some kind of programmable block), but no successful ones in the spirit of the original 0x10c vision (which was pretty vague and maybe the hype and high expectations killed it). I still think that one could build a great game around the main idea, but probably it is hard to balance the game mechanics between "real" programming and actual game play without alienating users that want to get into programming and actual programmers that want to play a game.

It's kinda ironic that I bought TIS-100 when it first came out, played for a few hours, lost interest, and never really touched it since. But then, later I found great fun writing actual x86-64 assembler.

HRM was a lot of fun; I wrote an interpreter in C# so I could run the scripts on a PC. I thought about creating a website where people could post new challenges and compete on optimal solutions. Some changes to hard levels take forever in the game; I could just edit a few things and have near-instant results.

I've had some fun solving very simple problems in brainfuck. It would be an interesting choice for a programming game because of how simple it is - you can learn it in like 5 minutes. It's also very easy to visualize the state: you can watch the program move around the tape and increment and decrement cells. Try it yourself - write a program to reverse an input string.
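Since the whole language is eight commands (`><+-.,[]`) operating on a tape, even the interpreter is tiny. Here is a minimal C++ sketch (error handling omitted), along with one well-known way to write the reverse program suggested above, `>,[>,]<[.<]`, which relies on `,` writing 0 at end of input:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Minimal brainfuck interpreter: 'prog' is the code, 'in' the input stream.
// Unmatched brackets and tape underrun are not handled (sketch only).
std::string run_bf(const std::string& prog, const std::string& in) {
    std::vector<unsigned char> tape(30000, 0);
    std::string out;
    std::size_t ptr = 0, in_pos = 0;
    for (std::size_t pc = 0; pc < prog.size(); ++pc) {
        switch (prog[pc]) {
            case '>': ++ptr; break;
            case '<': --ptr; break;
            case '+': ++tape[ptr]; break;
            case '-': --tape[ptr]; break;
            case '.': out += static_cast<char>(tape[ptr]); break;
            case ',': tape[ptr] = in_pos < in.size() ? in[in_pos++] : 0; break;
            case '[':  // if cell is 0, jump forward past the matching ]
                if (tape[ptr] == 0)
                    for (int d = 1; d; ) { ++pc; d += prog[pc]=='[' ? 1 : prog[pc]==']' ? -1 : 0; }
                break;
            case ']':  // if cell is nonzero, jump back to the matching [
                if (tape[ptr] != 0)
                    for (int d = 1; d; ) { --pc; d += prog[pc]==']' ? 1 : prog[pc]=='[' ? -1 : 0; }
                break;
        }
    }
    return out;
}
```

Stepping through `run_bf` while watching `ptr` and the tape is exactly the kind of state visualization described above: the reverse program pushes each input byte one cell to the right, then walks back down the tape printing as it goes.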

I live in Colorado and see the fires regularly. Just about two months ago I stepped outside on a weekend morning and smelled smoke, then saw a firebomber fly overhead towards the mountains. It's a strange thing to get used to, though I'm not near the forests, just close enough to get a tax-payer funded airshow once or twice a year.

Perhaps the idea of having forests near population centers should be reevaluated. Clearing a few km of city-bordering forest will create a gap in which fire will not spread, if there is nothing flammable on the ground. The aesthetics of having "natural trees" nearby is not worth the risk, and polluted air from forest fires definitely doesn't help overall health.

> "Microsoft rolled out a patch for the vulnerability last March, but hackers took advantage of the fact that vulnerable targets - particularly hospitals - had yet to update their systems."

> "The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets."

It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network. Of course, as @mhogomchunu points out in his comment - is this the sort of thing where only one weak link is needed?

Still. Maybe this will help the proponents of keeping government systems updated? And/or, maybe this will prompt companies like MS to roll out security-only updates, to make it easier for sysadmins to keep their systems up-to-date...?

(presumably, a reason why these systems weren't updated is due to functionality concerns with updates...?)

One of the side effects of states participating in the proliferation of offensive tools. This won't be the last time state-sponsored tools, exploits, or backdoors fall into the hands of interested third parties.

I think collateral damage like that is way underrated by politicians all around the globe that call for their respective intelligence agencies to build up offensive capabilities to be able to conduct cyber warfare and whatnot.

Pretty hellish knowing they'd let that quietly sit there, in the name of espionage. I'm not sure the benefits outweigh the damage they're doing, without even mentioning the chilling effect and lack of confidence this instills in IT everywhere.

Wow, the future is here and it's not looking very good. We need to diversify our OSes in the enterprise. This time it was MSFT; next time it could be Linux. No OS gives an absolute guarantee. The systems are relatively dumb now - what will happen when AI has gotten deeper into our everyday lives? This is a wake-up call.

Wow, this is so insane. I really don't think the NSA should be finding vulnerabilities and keeping them to themselves.

I mean, I get that it is all to help stop the bad guys, but if you are keeping cyber weapons like this and don't follow responsible disclosure, you should be required to keep them as secure and locked down as possible.

Just like how a cop keeps their weapon on them, instead of setting it down on the table while eating lunch.

What gets me is why we don't see more viruses that _deliver_ the patch to fix the vulnerability.

It's perhaps a little more difficult, as you'd need a vulnerability to keep spreading the inoculation. Arguably, though, you release the virus, let it spread, and then trigger the inoculation using a mechanism like calling out to a webserver - just as the kill switch worked here.

Cyber attacks use patched exploit to attack systems running out of date software, even in large enterprises handling sensitive data?

I give a pass to individuals (bandwidth for updates can be expensive, regular users don't know about patch Tuesday etc), but enterprise scale deployment should have IT for this, and IT should have been well aware of this kind of thing happening.

While I can understand WikiLeaks position, I feel like it was incredibly short sighted and uninformed of them to release the code itself. Unless you believe that they are working with the Russian (and other?) governments to destabilize the west. Personally, I wouldn't be surprised if this was the case.

I was debugging a private web app today when I noticed a Python script agent suddenly performing a port scan on me. It was querying for something called "a2billing/common/javascript/misc.js". After googling that phrase, it seems I'm not the only person who has seen this today. The country of origin of the IP was Britain.

First of all, while I of all people love to pile onto the anti-NSA bandwagon (within constitutional reason that is, I don't advocate their abolishment, but that's a different conversation), there are quite a few non-three-letter related things that have contributed to this story and ones like it.

The primary issue at the heart of things like this, beyond the backdoors and 0-days is this: bad IT.

That being said, bad IT is far too often the fault of upper management, and not the IT people themselves. After years of sysadmining, I've seen the inside of hundreds of companies, from Fortune 500 oil to medium-sized law firms. You know what they have all been doing over the years? Cutting costs by cutting IT. Except... they completely fail to consider the long-term consequences, which end up costing more.

I blame things like this on two main groups: boards of directors and company executives. Far too often I ran into a situation where a company didn't even have a CIO or a CTO, and some senior one-man miracle show drowning in technical debt was reporting to a CEO or CFO and getting nowhere - and therefore getting no support, no budget, no personnel, etc. I've seen exceptions too, but they are far too rare. If it's not technical debt that's drowning the company, it tends to be politics. The bottom line is that forward-thinking IT personnel don't get heard, and inevitably companies hire people or an MSP with all the proprietary Cisco, Microsoft, Oracle, etc. bullshit certs that make the C's feel better, but don't actually produce the wanted results. They inevitably end up providing an inferior product with inferior service at a short-term cost just as high as doing it right the first time, and a much higher long-term cost.

If I could say one thing that could help prevent issues like this - besides my standard whinging about FOSS and the four freedoms and such - it's that we need better CTOs and CIOs to advocate on behalf of IT departments, and I think senior sysadmins who feel they have hit a ceiling should consider going for their MBAs and transitioning to those titles.

Now, onto the NSA angle of the story. Well... all I can say is I told ya so, with an extra note that HN in the past few years has been surprisingly dismissive of FOSS proponents who have been warning about these things.

First they made fun of us for saying everything was being spied on, and then Snowden happened (often followed by bullshit like "are you surprised?" or "what do you have to hide?").

Then we warned about proprietary systems, and then NSA/CIA tool leaks happened. (often followed by things like "but its for foreign collection only" and "but the NSA contributes to SElinux")

Y'all aren't listening until after the fact, and that's not going to fix anything.

Medical offices are notorious for having machines out of date, not properly secured, and not backed up. Just recently I wanted to get test results from a few years earlier from a previous doctor. Nope, the machine they were on runs a proprietary GE setup and it crashed. The same test a few years earlier? The hospital lost them and had no record of them being done. A different test I had done a month ago was hooked up to an aging Windows XP machine. Yes, it was networked, though I'm unsure if it was intranet only (I doubt it).

In the US, you have to manage your own healthcare. Get every result as a hard copy or on disk (in the case of MRI etc) and save it yourself. And back it up. That way you're prepared.

If anyone reading this was affected by this attack, please take it as an opportunity to start the journey to becoming "antifragile". If you are severely affected (mainly speaking about ransomware), it means you lack backups and the ability to self-heal infrastructure. These attacks will only get more frequent and more sophisticated. So, start now.

I hope the NSA can be held accountable for this and we can finally all agree that a government holding on to 0-days and asking for loophole encryption always bites back at the very people they claim to protect.

The entertainment system on my flight is mysteriously down. I wonder if it's connected. As a side thought does anyone know the vulnerability of critical systems such as airliners, air traffic control etc?

It looks to me like common stupidity... people opening attachments that they should not be opening. No need to involve the CIA, NSA, or other three-letter agencies' hacking tools... just old-school phishing. I see this happening much too often... people opening *.pdf.js attachments. No need for another conspiracy theory... stupidity explains it all. Just my 50.

> The attacks were reminiscent of the hack that took down dozens of websites last October, including Twitter, Spotify and PayPal, via devices connected to the internet, including printers and baby monitors.

Lazy writing at NYTimes; what on earth does this attack have to do with the one at hand? It's not broadly the same type of attack, nor the same scale, nor the same outcome.

Is Russia being hit the most because it was the NSA the one that was exploiting this vulnerability before? Perhaps they are leveraging some other leaked NSA tool that gives them more direct access to Russian computers?

The US military and intelligence communities focused hard on cyber offense, rather than improving the defensive standards and technologies practiced among allies. Because of this, several allies have important systems compromised by (essentially) US-engineered malware.

Isn't it peculiar that Russia remains the least hit, or not even hit at all? It seems like the West was a clear target. Connecting the dots here, suffice it to say Shadow Brokers serves Russian interests.

We are seeing bullet holes from what seems to have been cyber warfare between the former Cold War foes.

"He adds that the fear is that the ransomware cannot be broken, and thus data and files infected are either lost, or the only way to get them back would be to pay the ransom, which would involve giving money to criminals."

Maybe it is now time for a major review of the NHS's dependency on Microsoft software, and the NHS should seriously consider switching to Linux-based software.

Here is the BBC news update about the NHS Cyber attack:

"NHS trusts 'ran outdated software'

Some who have followed the issue of NHS cyber security are sharing a report from the IT news site Silicon, which reported last December that NHS trusts had been running outdated Windows XP software.

The website says that Microsoft officially ended support for Windows XP back in April 2014, meaning it was no longer fixing vulnerabilities in the system - except for clients that paid for an extended support deal.

The UK government initially paid Microsoft £5.5 million to keep providing security support - but the website adds that this deal ended in May 2015."

I remember reading this in the 90's and thinking it was yet another piece of dystopian speculative science fiction. Once or twice a decade I reread it to refresh my memory or when I cite it. Each time I'm struck by how it's gotten closer and closer to truth; and yet the story has not changed.

I don't remember where I read it (probably in an HN post or comment), but there was a great suggestion regarding organization charts. Instead of making one only when you scale from 10 to 20 people, as this article suggests, the idea was to make one on day 1.

On day 1, every position from CEO and CFO to mail-delivery messenger is filled with one or two names: the founder(s). As the company grows, you hire people and start delegating the work so that they can fill the positions on the organization chart.

Main gripe I have with PTVS: please improve doc rendering. You could use Sphinx to create rendered docs, as Spyder does very well. It makes coding so much easier for us mortals who have not memorized numpy and every caveat. I'm specifically talking about the very large docs in functions: rendered equations, links, and references. Putting all of that in an IntelliSense hover box is unusable; make it a separate window.

Otherwise it's pretty good. I actually use both Spyder and PTVS and am unhappy with both. Bad doc rendering in PTVS, no git in Spyder.

This happened a few weeks ago. But it's just a ruling on a preliminary injunction motion.

That is, it's not even a final decision of a court.

So while interesting, it's incredibly early in the process. The same court could issue a ruling going the exact opposite way after trial.

As someone else wrote, basically the court ruled that the plaintiff alleged enough facts that, if those facts were true, would give rise to an enforceable contract.

I.e., they held that someone wrote enough crap down that, if the crap is true, the other guy may have a problem.

They didn't actually determine whether any of the crap is true or not.

(In a motion to dismiss, the plaintiff's allegations are all taken as true. This is essentially a motion that says "even if everything the plaintiff says is right, I should still win". If you look, this is why the court specifically mentions that a bunch of the arguments the defendant makes would be more appropriate for summary judgement.)

To use Ghostscript for free, Hancom would have to adhere to its open-source license, the GNU General Public License (GPL). The GNU GPL requires that when you use GPL-licensed software to make some other software, the resulting software also has to be open-sourced with the same license if it's released to the public. That means Hancom would have to open-source its entire suite of apps.

Alternatively, Hancom could pay Artifex a licensing fee. Artifex allows developers of commercial or otherwise closed-source software to forego the strict open-source terms of the GNU GPL if they're willing to pay for it.

This obligation has been termed "reciprocity," and it lies at the heart of many open source business models.

The more important issue here is reciprocity, not whether an open source license should be considered to be a contract.

AFAIK, the reciprocity provision of any version of the GPL hasn't been tested in any meaningful way within the US. In particular, the specific use cases that trigger reciprocity remain cloudy at best in my mind.

Some companies claim that merely linking to a GPLed library is sufficient to trigger reciprocity. FSF published the LGPL specifically to address this point.

"Corley denied the motion, and in doing so, set the precedent that licenses like the GNU GPL can be treated like legal contracts, and developers can legitimately sue when those contracts are breached."

The GNU GPL was written on the basis that if someone does not accept its terms, then, without any other license from the copyright holder, redistribution puts that person in violation of copyright law.

Suing for damages on the basis of a breach of copyright law clearly does not require any contract.

So this is more about a technicality of the legal process in this particular case, rather than anything about whether copyleft is legally enforceable or not in general.

Specifically, because the motion denial was based on the defendant's own admission being deemed to be the agreement of a contract, this says nothing about the general enforceability of the GPL (future defendants could simply avoid making such an admission).

Further, since the ruling was in response to a specific motion, it only concerns the claims made in that motion: about whether a contract exists in this particular case. It says nothing about the "copyright violation if you don't accept the license" mechanism of copyleft.

Finally, the article does not provide any evidence that there has been any ruling that determined that the GPL is an enforceable legal contract, contrary to its title. The ruling as quoted just says that the defendant, by its own admission, did accept to enter in to the GPL-defined contract.

A friend of mine, who is a software engineer turned IP lawyer, made a good point about the GPL - the reason it "has never been challenged in court" isn't about uncertainty, but about certainty. The GPL is based on the most simple, bedrock copyright law. Despite being a clever hack, there's nothing legally exotic about it.

Any judge in the country, or anywhere else, would laugh a GPL challenge right out of court. And any IP lawyer reading it would tell their client that that's what's going to happen if they try to challenge it. That's why it's never been fully tested in court... no need.

This is great - love or hate the GPL, it brings something unique to the table that no other license does, and developers should have the ability to license their software under the terms that best fit their motivation for developing it in the first place. The GPL does exactly that for many.

One thing I often wonder is how a company providing such open-source software can find out (and prove) that someone is using it in a closed-source project. All I can think of is "guessing" based on the behavior of the downstream tool.

Also, the article doesn't say much about how that lawsuit came to be. Did Artifex approach Hancom beforehand to notify them about the license infringement or just directly sue? I guess in this particular case, Hancom knew what they were doing, but I can imagine some (smaller) companies not being fully aware of open source license specifics and unknowingly running into a lawsuit.

Ask HN: What if the vendor had structured their product so that GhostScript was its own stand-alone app? Would they still be obligated to release their entire code, or just the portion that uses GhostScript?

Moral of the story: know your licences. Adhere to the license terms. Seek out projects with more permissive licenses if you plan to do closed source.

It is simple to work around licence issues with your project. You just have to put in the work. Know that your design may have to factor in extra time because you can't use lib XYZ, so you have to write your own library to do the same thing. If using lib XYZ will save a bunch of time, then know that you will have to adhere to lib XYZ's license. Writing a wrapper application that you open source, which your closed-source application interfaces with, might be a design consideration.
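A minimal sketch of that process-boundary idea (whether keeping the GPL tool in a separate process actually avoids copyleft obligations is exactly the legal question debated elsewhere in this thread; the tool invoked below is a stand-in, not a real converter):

```python
import subprocess

def run_external_tool(tool, args, data):
    """Invoke an external (e.g. GPL-licensed) command-line tool as a
    separate process: input goes in via stdin, output comes back via
    stdout, and the caller never links against the tool's code."""
    result = subprocess.run([tool, *args], input=data,
                            capture_output=True, check=True)
    return result.stdout

# Demo with `cat` as a stand-in; a real wrapper might invoke a
# PDF converter or similar command-line tool here.
print(run_external_tool("cat", [], b"hello"))
```

The closed-source side only ever sees bytes crossing a pipe, which is the strongest separation a wrapper design can offer short of a network boundary.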

In the end, it's your project, your call. Just make sure that when you make a decision, you have weighed the pros and cons of going forward with it.

What happens if they claim they downloaded it from somewhere else that didn't include the license.txt file? There is no proof they ever were even notified of the license. (this is why we usually have people sign contracts)

> Of course, whether Artifex will actually win the case its now allowed to pursue is another question altogether.

It's fairly clear that they will win the case in one fashion or another. I am predicting that the case will quickly be settled out of court for a lump sum plus a running licensing fee. You have a public admission from the defendant that they integrated the plaintiff's Ghostscript software into their own without either: 1) making the resulting Hancom office suite open source, or 2) paying Artifex a licensing fee for the software.

The case against Hancom was solid under copyright infringement, and now has the added sting of breach of contract.

The article somewhat overstates the significance of this case in terms of precedential value.

On a procedural level, understand that this is a district court opinion and is not binding on any other court. Of course, if other courts find the arguments persuasive, they can adopt the reasoning. But no court has to adopt the reasoning in this opinion.

On a substantive level, it's important to look at the arguments the court is addressing and how they are addressed:

1) Did the plaintiff adequately allege a breach of contract claim?

We're at the motion to dismiss phase here and the court is only looking at plaintiff's complaint and accepting all of the allegations as true.

There are essentially only 2 arguments the court addresses: A) Was there a contract here at all?; and B) Did the plaintiff adequately allege a recognizable harm?

Understand that in a complaint for breach of contract, a plaintiff has to allege certain things: (i) the existence of a contract; (ii) plaintiff performed or was excused from performance; (iii) defendant's breach; (iv) damages. So, the court is addressing (i) and (iv), which I refer to as (A) and (B) above.

As to (A), the argument the defendant appears to have made is that an open source license is not enforceable because of a lack of "mutual assent." In other words, as with a EULA or shrink-wrap license, some argue that merely using software subject to an open source license doesn't demonstrate that you agreed to the terms of that license.

The court, without any real analysis, says that by alleging the existence of an open source license and using the source code, that is sufficient to allege the existence of a contract. The court cites as precedent that alleging the existence of a shrink-wrap license has been held as sufficient to allege the existence of a contract.

But the key word here is "allege." As the case proceeds, the defendant is free to develop evidence to show that there was no agreement between the parties as to the terms of a license. So, very little definitive was actually decided at this stage. All that was decided is that alleging that an open source license existed is not legally deficient per se to allege the existence of a contract.

As to (B), defendant apparently argued that plaintiff suffered no recognizable harm from defendant's actions. The court held that defendant deprived plaintiff of commercial license fees.

In addition, and more important for the audience here, the court held that there is a recognizable harm based on defendant's failure to comply with the open source requirements of the GPL license. Basically, the court says that there are recognizable benefits (including economic benefits) that come from the creation and distribution of public source code, wholly apart from license fees.

This is key - if the plaintiff did not have a paid commercial licensing program, it could STILL sue for breach of contract because of this second type of harm.

That being said, none of this argument is new. There is established precedent on this point.

2) Is the breach of contract claim preempted?

Copyright law in the United States is federal law. Breach of contract is state law. A plaintiff cannot use a state law claim to enforce rights duplicative of those protected by federal copyright law.

So, what the court is looking at here, is whether there is some extra right that the breach of contract claim addresses that is not provided under copyright law.

In other words, if the only thing the breach of contract claim addressed were the right to publish or create derivative works, then it would be duplicative of the copyright claim. And, therefore, it would be preempted.

Here, the court held that there are two rights that the breach of contract claim addresses that are different from what copyright law protects: (A) the requirement to open source; and (B) compensation for "extraterritorial" infringement.

The real key here is (A), not (B). With respect to (A), the court here is saying that the GNU GPL's copyleft provisions that defendant allegedly breached are an extra right that is being enforced through the breach of contract claim that are not protected under copyright law. Therefore, the contract claim is not preempted.

(B) is a bit less significant for broader application. What (B) is saying is that because the plaintiff is suing for defendant's infringement outside the U.S. ("extraterritorial" infringement), and federal copyright law doesn't necessarily address such infringement, that's an "extra element" of the breach of contract claim. I say this is less significant because it wouldn't apply to a defendant who didn't infringe outside the United States. So, if you were the plaintiff here and the defendant was in California and only distributed the software in the U.S., argument (B) wouldn't apply.

I hope this clarifies what is/is not significant about the opinion here.

This is why, if someone were the (usually) imaginary "Free Software zealot" who would like to prevent a private business from profiting off public work, it would be necessary not only for the software to be under a Free license, but for the copyright to be held by someone who agrees with said Free Software "zealot".

The GPL has such strong terms, I think there is good reason to avoid ever reading any GPL codebase. Tainting yourself may imperil any code you write for the rest of your lifetime. And to that end, I think github should place a large warning on any GPL repo before letting you see it, as well as delisting them from search results (or at least hiding the contents)

Can someone explain what is it about some crappy shows and Hollywood movies that it deserves such invasive attacks on device ownership?

When Microsoft tried Secure Boot there was a huge outcry. But when HBO/Netflix/Verizon/WB demand a complete lockdown of your device (to the point where AACS 2.0 demands you have a special CPU, Motherboard, GPU and more components that lock you out and disable themselves if you use custom software/drivers), then suddenly even on HN I see a huge amount of people defending a complete lockout from your device to the point where you're not allowed to even install a custom, better, driver.

What is it about some shows/movies that would be SO DAMAGING to whole society if a few people would be able to copy them on another device or even give it to a friend?!

This sort of thing doesn't surprise me from Netflix. It has been tightening up its rules for some time.

I dropped Netflix after whoever is in the group that decides policy for Netflix decided that Hurricane Electric's IPv6 tunnels are "a VPN" that is being used to circumvent Netflix's location checks with no warning.

(I'm aware of the DNS tricks I can do to only return IPv4 addresses in response to queries for the netflix.com zone. I choose not to do them and, instead, to not avail myself of Netflix's content.)

Anyone who roots their phone, in all likelihood, knows how to pirate the content; granted, Netflix is way more convenient, and that's why I pay them. I don't watch on mobile devices, so this doesn't bother me. If I did watch on mobile and I had a rooted device, they would definitely stop seeing my money though. When will these companies learn to stop going after the nerds? We're the ones who actually know how to get around you if you piss us off.

Isn't it possible for a rooted device to fake being a non-rooted device to (selected) applications? To my understanding, root means having full control, but I fear that this definition doesn't apply to smartphones.

This is particularly absurd because it's trivial to record from any device that has HDMI out; HDCP 1.x is quite thoroughly broken, and there is a steady stream of HDMI splitters that can strip HDCP 2.x

This is kinda silly though. Pirates will not bother using Netflix. You have Stremio, Popcorn Time, The Pirate Bay and hundreds of streaming websites with more content for free. If somebody is paying, let the customer have their unlocked phone.

There are going to be people that blame Netflix for this, but it's really not their fault. They didn't even care if people used VPN's to access their service. Pressure from the content providers forced them to do this.

I wonder how they knew whether this is just one narwhal's (or one colony's) modified behavior or the behavior of all narwhals. Sea mammals are pretty intelligent; who's to say that this one, or this colony, hasn't adapted some use for their tusk that is restricted to just them?

If "Zero Day: The story of MS17-010" is meant to be an accurate report of facts regarding MS17-010, then there is at least one inaccuracy in it:

> someone calling themselves "the Shadow Brokers" leaks a huge trove of classified NSA documents to WikiLeaks, who in turn dump it on the internet.

Shadow Brokers didn't leak to Wikileaks. Shadow Brokers uploaded the trove of NSA documents to `mega.nz`, and someone else downloaded the trove to GitHub[1]. Wikileaks merely tweeted about this after it happened.[2]

Correction: As per the well-sourced Wikipedia article[3], this was not the `mega.nz` leak; this was another, subsequent one. The main point still stands: Wikileaks had nothing to do with publishing the MS17-010 vulnerability.

Would be nice to stop pushing the false narrative that Wikileaks was involved in that one NSA leak.

Absolutely fabulous. Best part: "NSA hoard their knowledge of weaknesses in Microsoft Windows, a vitally important piece of their own nation's infrastructure, in case they'll come in handy against some hypothetical future enemy. (I'm sorry, but this just won't wash; surely the good guys would prioritize protecting their own corporate infrastructure?)"

Yep - way too implausible, even for hacker fiction.

Anyway, sounds like your book was Nostradamus-esque in depicting recent events. Maybe a bit too good :D

I still have vivid memories of, as a kid, stumbling upon this network of GeoCities pages about "Echelon" and how the US could read all of the worlds email and search for trigger words - and how absurd and tinfoil-hat-y it was made to sound by the rest of the internet.

Having this memory absolutely changed the way I've been viewing NSA related leaks in the past few years.

> surely the good guys would prioritize protecting their own corporate infrastructure?

Let us not forget that this used to be part of the NSA's mission. A part that was essentially abandoned early in the 21st century.

For example, the NSA required mysterious changes to be made to the DES s-box; many assumed at the time (as did I) that the agency wanted to weaken security, but it turned out, to quote Bruce Schneier, "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."

I found this explanation pretty convincing as to why there was such a dumb kill switch embedded in the malware:

"I believe they were trying to query an intentionally unregistered domain which would appear registered in certain sandbox environments; then, once they see the domain responding, they know they're in a sandbox, and the malware exits to prevent further analysis."

It is funny to me that no one ever talks about how Mark Russinovich, of Sysinternals fame and now a leading engineer of Azure cloud systems, wrote a novel about such doomsday scenarios before the trend of the last 5 years or so.

That he wrote premier system-introspection tools for Windows makes me think he must have been made privy, discreetly, by colleagues to the complexity of such things long before the DREAD and SDLC fruits were borne out in the Vista/7 era.

"ETERNALBLUE was part of a release of code that also gave us such interesting names as EDUCATEDSCHOLAR, ETERNALROMANCE, and ERRATICGOPHER. Oh to be a fly on the wall at the classified NSA committee meetings discussing the deployment of their weaponized ERRATIC GOPHER ..."

Anyone know what the E means in the code names? There's a list somewhere, but I can't remember where now.

You know that technical reviewer's past it. Thirty years ago he was planning world war three from bunkers underneath volcanos, and holding the world to ransom with diamond-encrusted lasers in space. Whereas last year all he could come up with was a grand scheme to become a multinational government IT contractor, while moonlighting a side business clearing derelict buildings for redevelopment.

And in a matter of hours, the new malware, known as Wanna Decryptor, infects the entire British National Health Service, a Spanish cellphone company, FedEx, and over a third of a million computers whose owners had lazily failed to enable automatic security updates from Microsoft.

Besides the false association of TSB and Wikileaks that others have mentioned, I have a huge problem with this. Someone who gets kidnapped by pirates (The Shadow Brokers) while running from a press gang (Microsoft) is still a victim. Calling them "lazy" is an easy way to avoid the hard work of apportioning blame correctly.

A hell of a lot of that blame goes to Microsoft themselves, for turning an important security update service into a marketing channel. Maybe Stross gets around to pointing that out, but I stopped reading there.

Some thoughts: While working in finance I was able to talk to many people whose work was related to economics: employees at banks and brokerages, governments and regulatory bodies. Most had heard of Piketty, and many agreed with his basic premises (r > g, and all it entails). His reach actually surprised me.

But I also had contact with academics, and like the article said, it's not so much that academics are refuting Piketty, but that they simply aren't studying the same problems that he is talking about. From what I've been told, a lot of academic work in economics is focused on incredibly unique and specific problems. It isn't "fashionable" to be studying something so broad and perhaps abstract as inequality.

They're not. When The Economist agrees you have a point, you're pretty mainstream. True, some economists are nitpicking his book because they don't like the conclusion, but the problem Piketty has identified is rarely denied.

I read Piketty's book, and some of the critical response. In my semi-educated opinion (I only have a Bachelor's in Economics), the criticisms failed to poke a hole in his main argument: that capitalism as a system is unegalitarian because it allows wealth to grow faster than economic output, resulting in increasing amounts of income inequality over time.

> But perhaps the greatest rebuke of Piketty to be found among academic economics is not contained in any of these overt or veiled attacks on his scholarship and interpretation, but rather in the deafening silence that greets it, as well as inequality in general, in broad swathes of the field, even to this day.

The reason for this deafening silence is simple: the truth revealed by Piketty is inconvenient, and there are no easy solutions.

Very few economists I know of disagree with Piketty's conclusions. Everyone knows he's right. But it is a very inconvenient problem, and most economists and leaders have a vested interest in not solving it.

Piketty's idea that r > g leads to wealth inequality just makes me shrug. You can't really have it another way.

If r (the rate of return on capital) is less than g (the growth in output) that means that people have no incentive to build wealth or become more intelligent to deploy that capital more profitably. If I'm never going to make more than the growth in output, why bother with capital?

The logical conclusion to that would be that everyone wants to be an employee and nobody wants to be an employer.

If I can grow my wealth more quickly than the nation's output, I'm grabbing a bigger slice of the pie. The hope with capitalism is that I'm grabbing that pie because I've earned it, and the market hopes I'll be able to steward that wealth.

So yeah, capitalism may be inherently inclined to wealth inequality because some people outperform. But do you really want it another way?

There certainly is wealth inequality in the world, but it isn't actionable to blame it on r > g. It's more effective to look at things on a micro basis. Does this person have a child that is prohibiting them from saving? Why is the person being excluded from jobs? Do they have a proper education?
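As a toy illustration of the r > g dynamic (the numbers are made up for the sketch, not drawn from Piketty's data): let capital compound at r while total output grows at g, and watch the capital-to-output ratio climb.

```python
r, g = 0.05, 0.02           # assumed rates: return on capital vs. output growth
wealth, output = 1.0, 10.0  # arbitrary starting values; the ratio starts at 0.1

# Capital compounds at r, the economy grows at g.
for year in range(50):
    wealth *= 1 + r
    output *= 1 + g

ratio = wealth / output
print(round(ratio, 2))  # after 50 years the capital owner's slice has roughly quadrupled
```

Nothing in the loop redistributes anything; the divergence falls straight out of the two exponents, which is why the mechanism itself is hard to dispute even when its interpretation is.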

Saying what people want to hear does not make you a good researcher. They don't take him seriously for the same reason we don't take young-earth creationist researchers seriously. It's not good research.

Edit: I'm not an economist, and I'm not going to do justice to the criticisms (which aren't hard to find), but fine, here are links:

I'm new to fuzzers and fuzz testing in general, so I apologise for my ignorance about the purpose of fuzzing. My understanding is that fuzzing tests the user-facing side (which is what is important for most programs). Does there exist similar tooling for testing the system-facing side (i.e. the stack below your application), to check your application's error handling, for example, and uncover corner cases? What I'm getting at is something like syzkaller but for userspace, so library functions beneath your application would return wrong values and you get to see how your application responds to them.
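As far as I know there's no direct userspace equivalent of syzkaller, but the fault-injection idea can be crudely sketched in-process. Here's a Python sketch (the function names, error rate, and the example under test are all invented for illustration): wrap a library call so it intermittently raises, then check that the application's error handling holds up.

```python
import random
from unittest import mock

def flaky(func, error=OSError, rate=0.5, seed=0):
    """Wrap a library function so it sometimes raises, simulating a
    misbehaving layer beneath the application: a crude in-process
    stand-in for syzkaller-style fault injection."""
    rng = random.Random(seed)
    def wrapper(*args, **kwargs):
        if rng.random() < rate:
            raise error("injected fault")
        return func(*args, **kwargs)
    return wrapper

def read_config(path):
    """Application code whose error handling we want to exercise."""
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""  # fallback path under test

# With rate=1.0 every open() fails, so the fallback must trigger.
with mock.patch("builtins.open", flaky(open, rate=1.0)):
    assert read_config("/etc/hostname") == ""
```

Real systems do this more thoroughly with LD_PRELOAD shims or kernel-level fault injection, but the principle is the same: make the layer below lie, and see whether the layer above copes.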

This fuzzing is interesting stuff. Does anyone know of an in-process (or other) fuzzing lib for the JVM? FindBugs is mentioned in here, but I'm not sure if that does fuzzing (maybe a plugin?).

Seems to me to be a nice complement to achieving code coverage with testing, i.e. whereas unit/integration testing might exercise the various code paths with a few good/bad values, this throws every possible input value at them to see what breaks.
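The brute-force version of that idea fits in a few lines; the parser below is a made-up example, and real fuzzers (AFL, libFuzzer) add coverage feedback and input mutation on top of this naive loop:

```python
import random

def parse_price(s):
    """Toy function under test: parse a string like '12.34' into cents."""
    dollars, cents = s.split(".")
    return int(dollars) * 100 + int(cents)

# Naive fuzz loop: hurl random printable strings at the parser and
# record which inputs make it blow up.
rng = random.Random(42)
crashes = []
for _ in range(1000):
    s = "".join(chr(rng.randrange(32, 127)) for _ in range(rng.randrange(1, 8)))
    try:
        parse_price(s)
    except (ValueError, IndexError):
        crashes.append(s)

print(len(crashes), "of 1000 random inputs crashed the parser")
```

Even this seed-fixed toy finds the missing-dot and non-digit cases immediately, which is exactly the "few good/bad values" a hand-written unit test tends to skip.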

This notes that they disabled reading config files so the fuzzer always ran against the default setup. I assume that, with more time, it would be wise to try to fuzz as many configuration options as possible as well?

Relevant & cogent discussion from 1981 Nightline I found on Obscure Media sub-reddit just yesterday. Jobs makes some spot on predictions but managed to avoid speaking too directly on privacy. The author is not nearly as charismatic nor accustomed to speaking on camera/in public... and makes some validated predictions, too.

Intro is a good watch for nostalgia and perspective; relevant Jobs interview starts @ 4:20.

I wonder if the people who wrote the report were also considered cuckoo crazy conspiracy theorists then (as Richard Stallman has been since around the same time).

Good thing they gutted it in 1995, I guess. Congress didn't want the public to find out about such facts.

> Criticism of the agency was fueled by Fat City, a 1980 book by Donald Lambro that was regarded favorably by the Reagan administration; it called OTA an "unnecessary agency" that duplicated government work done elsewhere. OTA was abolished (technically "de-funded") in the "Contract with America" period of Newt Gingrich's Republican ascendancy in Congress.

> When the 104th Congress withdrew funding for OTA, it had a full-time staff of 143 people and an annual budget of $21.9 million. The Office of Technology Assessment closed on September 29, 1995. The move was criticized at the time, including by Republican representative Amo Houghton, who commented at the time of OTA's defunding that "we are cutting off one of the most important arms of Congress when we cut off unbiased knowledge about science and technology".[1]

> Critics of the closure saw it as an example of politics overriding science, and a variety of scientists such as biologist PZ Myers have called for the agency's reinstatement.

"Civil rights in the future could be threatened by a bloodless adversary -- the computer.

"That's the opinion of the Congressional Office of Technology Assessment in a 116-page report released late last year.

"'Extensive data collection and possibly surveillance by government and private organizations could, in fact, suppress or 'chill' freedoms of speech, assembly, and even religion by implicit threats contained in such collection or surveillance,' the report said....

"[T]he use of an electronic funds transfer system to gather the same type of information would be far more intrusive, since much more data, some of it of a highly personal nature, could be collected in secret."

The rich can win trials... "Before a trial, attorneys for both sides routinely obtain the names of potential jurors on the day of jury selection. It's now possible using big-data sources to flag or score potential jurors on certain factors (fiscal and social ideology, for example, or attitudes relevant to liability or damages), enabling lawyers to make exceedingly nuanced strikes."

For those interested in early explorations of computers, rights, and privacy, there was another large survey article published in a magazine ... sometime in the early 1970s which for the life of me I cannot find now.

It detailed government and business computer use, and was early, closer to 1970 than 1980 as I recall. Several pages, fairly prescient and well written.

If anyone can recognize the piece from an admittedly vague description, I'd appreciate a link. I've seen it online, if that helps.

Finally, can't believe it took them this long. The sorry state of the update situation is one of the worst things about Android. Next step would probably be to provide an API to the OEMs so they can add their "value add" functionality as apps, so Google can push updates to all phones regardless of hardware drivers and OEM modifications. And maybe make it possible to update emoji via the Play store, instead of needing a new system update. I don't like the blank boxes in messages from my iOS friends.

I wonder if this means that Google will lead by example and prolong the time they deliver updates to their own phones. They don't guarantee new updates to their current Pixel phones after October 2018 [0], which is not good enough.

"device makers can choose to deliver a new Android Update"... "can choose".

Preferably they shouldn't be able to choose. Google should be in charge of updates and manufacturers should have to make a special effort to prevent an update. i.e if they are certain that an update will brick their device they would then make a formal request to google not to send the update to their devices.

Android finally adopted the approach of Windows on PC: the OS maker dictates the software on all devices, and the device makers only create the hardware and write drivers (and, optionally, some bloatware). I believe this is the right/better approach, and it solves not only the Android update hassle, but more importantly the fragmentation issue.

This should usher in a new era of cheap phones that upgrade immediately to the newest version of the Android OS.

It lowers the price floor for a shiny new phone. All of these additional features are expensive to create, but they are differentiators. With this, Google has the ability to push more new features in the base OS, and this standard makes it easier for Google to compete with all of these manufacturers' features.

Now it's up to them to make compelling reasons to upgrade their phones beyond apps. I see things like Google Assistant, Mapping, etc. being more integrated into the OS so that you are always in the Google system no matter what app you are currently in.

This is a big and brilliant win if they can first pull it off technically and then pull it off with compelling services. They certainly look like they are investing heavily in both.

I look forward to a $99 or $199 (or $49 if you can stomach sketchy Chinese phones) phone that just keeps getting better and better and better for free as long as the phone works. This also makes a very compelling thing to make the phone into a computer once the battery can't hold a charge, etc. Take the guts or use some kind of USB->HDMI out and make it into a TV app or a digital mirror or another internet station somewhere.

I am amused that their graphical representation of the Android version customized for a particular model of phone is "Android mascot dressed up in a really cool spacesuit looking thing" and not "Android mascot with bags of trash stapled haphazardly to him," which would probably be more accurate.

Maybe we can benefit from this in 2 or 3 years? I'm very pessimistic... it will take LOOOOOOONG before vendors look into Android O and its interfaces, and the first generation benefiting from this will be the earliest Android P updates. And do not forget: this whole process does not reduce testing time, and the carriers might also insist on long testing of updates ;)

If Google actually implements a way of pushing those underlying Android updates directly to the phones, then I think they might actually be successful. If Google ends up still relying on the manufacturers and carriers to push those updates out, then what incentive will those parties have to keep the phones updated?

It is my opinion that Google does not view Android as simply "an operating system for phones". Android has tremendous application in IoT devices and appliances. The lifecycle of many of these applications is quite a bit longer than that of the cell phone.

As we see an increase in the diversity of applications using Android, this upgrade path will be very important. Just wait until you see your first ATM or POS system "Powered By Android ".

For someone that is considering Android, coming from iOS, this is a brilliant idea that should have been implemented long ago.

For example, a lot of Android phones are running 4.4 and 5.0 in this part of the world. Those versions are pretty bad and the people that bought Android 4.4 and 5.0 actually do not know what they are missing and how to actually update their OS since there is no way for them to do that for now.

I hope that with this Treble, there will be a lot more Android phones(from Chinese makers) that can update base Android OS to the latest one much more frequently.

So this will get users onto the next Android framework version, but if there are security bugs in the vendor implementation or underlying firmware, things will still be problematic for users. It will, however, solve the PR problem for Google if OEMs and carriers update the framework version quickly enough: the question raised mostly by tech pundits ("when am I going to get the next update to Android?") will have a satisfactory answer.

Not to say this isn't a huge step forward from status quo - if vendors contribute features and fixes to MediaServer and everybody uses the same implementation it will be much easier to update it for all vendors.

What still sucks is this is not going to be Google that will update the Android framework - it's still OEMs and the carriers.

I think this will be pretty successful. Ultimately the manufacturers want to do as little software work as possible. If Project Treble gives them easier/less work to do, then they will adopt it quickly.

One potential side benefit of this type of work: vendor kernel drivers tend to be insecure buggy pieces of crap. Vendor Treble drivers will surely still be insecure buggy pieces of crap, but they might be sandboxable. If Google really has its eyes on the Magenta kernel, I imagine that Treble will be runnable in user mode, so I bet it really will be sandboxed. This would be a huge win.

It's incredible to me how long it took Google to realize this was their fault. A lot of people here have bought the "blame the OEM" nonsense for a really long time, and you can see the comments here reflect that.

But in reality, there's a huge expense to all the work of updating devices to support Google's rapid change cycle for dozens or hundreds of different models, and the problem stems first and foremost from that lack of abstraction layer.

This is likely a first step to finally catching up to Windows Mobile: Making the core OS upgrade come straight from the actual OS developer, so that the company that writes the code is actually the one that updates the code.