"For more than a decade, neuroscientist Grégoire Courtine has been flying every few months from his lab at the Swiss Federal Institute of Technology in Lausanne to another lab in Beijing, China, where he conducts research on monkeys with the aim of treating spinal-cord injuries.

The commute is exhausting: on occasion he has even flown to Beijing, done experiments, and returned the same night. But it is worth it, says Courtine, because working with monkeys in China is less burdened by regulation than it is in Europe and the United States."

I personally know researchers who have to take similar measures to be able to do their research.

When we as a society are happy to slaughter billions of animals every year for food (usually keeping them in appalling conditions beforehand), I don't understand how we can justify the restrictions we put on scientists.

"The problem I see with this is that they implant the monkey before paralysis and can use machine learning to build a model of the monkey's unique representation for walking. Then they paralyze it and show the monkeys can adapt to using only their interface in 5 days. There's no way the mapping from neural activity to stimulation is generalizable..." [1]

I struggled with this while in graduate school studying neuroscience. Fortunately, I was studying crustaceans, so there were not too many ethical issues. But there was rodent and monkey research happening at the school. I chose to return to my first profession: software. I assume that I'll be long dead before there are ethical issues with experimenting on AI.

Animal research is something that we as a society are going to struggle with for many years into the future. I would like to be able to argue that it is for the greater good of the planet, but I don't think that I could put up a good argument that we are being proper stewards of the planet.

This is absolutely fantastic, and it's a great step in the right direction, but we are still years if not decades away from being able to circumvent spinal lesions in humans.

One of the major problems with human spinal lesions is the loss of control of the trunk of the body (i.e. the core). Humans have dozens of muscle groups that control balance and posture through minute movements. Without fine-grained control over these muscles, balance is going to be extremely difficult.

We have seen many such interfaces in recent years, but as I understand these techniques they all suffer the same problem. The body eventually coats the electrodes in a non-conductive layer, essentially scar tissue, and the whole thing grinds to a halt in a matter of weeks. Until that practical limitation is surmounted, it seems very wrong to treat animals in such a manner. Watching the paralyzed monkey (also the rat) struggle in obvious agony as the commentators cheered was not fun. This was not the sort of thing I had expected to see on a program clearly aimed at a general audience. This was the very dark side of animal testing.

I saw this story on a Canadian news show a couple of days ago, a show that regularly does spots on human exploration of Mars. The stories are similar. The headline feel-good story is about the shiny new Mars habitat or spacesuit someone is testing, but they haven't figured out how to actually get it to Mars. The result is the false impression that the dreamed-of future, colonizing Mars and curing paralysis, is closer than it really is.

I personally want to just have an RSS reader that gives me the articles from the sites I freaking asked for. Not what some service thinks I'll like. Google, Pocket, and other sites are already doing a great job of thumbing through what I read and trying to sneak in their "suggestions": that's the kind of invasive data-combing that I want to avoid.

About 90% of what I see is irrelevant, but I need to see it to filter out the relevant data. I'd pay a decent amount for a service that can tailor itself and deliver relevant results without much manual intervention. InoReader and some other RSS readers have decent filtering, but it's all manual.

So this excited me, but without a way to import my existing feeds it's largely useless to me.

In the category of self-hosted stream readers there's also Tiny Tiny RSS [0], which I use and enjoy. It has a nice Android app and pluggable scrapers for extracting webcomic images and full content from summary feeds. Although it's been described as having "the most hostile primary maintainer of a piece of software I've ever encountered", which I experienced a bit of personally last time I tried to contribute upstream.

My only criticism would be that the Getting Started page is definitely skewed towards a specific segment. Like, how is Sports not an option? Music? I understand that there are only 9 slots in your designs, but please choose some that are more representative of general interests, not just Palo Alto mid-20s coffee shop startup founders.

I'm confused. It looks like this won't work unless you use the stream API, which is (apparently) not open source / self hosted.

The only reasons I can think of to host my own RSS reader are preventing third-party profiling, editorial transparency, and future-proofing against the provider shutting down or evolving the service in a way that I don't like. This provides none of those advantages.

Digg RSS Reader has been unbelievably good. The best thing about it (so far; knock on wood) is that they don't tinker or change anything, because everything is working like it should. I don't really see a reason to try anyone else.

Nick here from GetStream.io (Winds). We didn't expect Winds to see so much traffic today, so we're experiencing temporary downtime. We'll have everything back up and running as soon as possible. Thank you for all of your support!

Believe me, you read these kinds of articles from a completely different perspective when the "beating-heart corpse" is your mother and the decision to disconnect her is on you. I hope nobody has to do it in their lifetime.

Going off on a bit of a tangent, it's interesting to think about how 100-odd years ago, if your heart stopped, you were declared dead. Then we got defibrillators and CPR, etc., and suddenly if your heart stopped you could still be resuscitated.

Further, nowadays, if you have a heart attack and no one makes an effort to resuscitate you, and there's someone who knows how nearby, most people would argue that that person has shirked a moral obligation.

This is one of the arguments made for cryonics[0], in two ways. If in the future it's possible that we'll be able to revive a cryopreserved body, then you can't really consider someone dead if they're cryopreserved within a certain timeframe after their heart stops (just as today you can't consider someone dead within the ~5 minutes after their heart stops, because of CPR, etc). What's more, the people of the future will have a moral obligation to revive cryopreserved bodies, because not doing so would be the equivalent of not attempting CPR on someone who just had a heart attack. This is the response given when someone asks why anyone in the future would be bothered to go through the hassle of reviving cryopreserved bodies.

For a lot more on this topic than you ever wanted to know, I highly recommend Dick Teresi's "Brain Dead". It's a scary and fascinating look at the so-called "death determination" industry that addresses one of our society's least-asked questions: in an era of unprecedented medical skill and technology, who draws the line between life and death, why do they draw it there, and how much are they getting paid?

As medicine gets better and we develop a better understanding of how our bodies work, this question will only get more difficult. At some point it will be possible to program your cells to repair any damage to your body, including regenerating limbs or organs. And when you can do that, take someone who is "dead", hook them up to the "autodoc"[1], and have it repair the damage and restart them, then the only question might be: how many short-term memories did you lose? And if you can download those somehow, then even a partly decomposed body could be the source material for reconstructing the original: not a clone per se, because parts of the original are re-used, but certainly a full retrofit. Then a lot of built-in human assumptions get really challenged.

[1] This is a name used in science fiction stories to describe the machine that mechanically repairs tissue damage.

I for one would be completely fine with doctors harvesting my organs after my brain is sufficiently damaged that I won't experience conscious thought again, even if my lizard brain is still alive and supports all bodily functions.

Food for thought for people reading the above: if a pregnant woman has a cardiac arrest, you have 3 minutes to decide if you are going to perform an emergency Caesarean section to save the child's life, or else you kill it too.

Is there a medical person here who can help me out with this? Take these excerpts from the article:

"He suggested using the newly invented stethoscope to listen for a heartbeat; if the doctor didn't hear anything for two minutes, the patient could be safely buried."

"An electrical engineer from Brooklyn, New York, had been investigating why people die after they've been electrocuted and wondered if the right voltage might also jolt them back to life."

"Starting in the 1950s, doctors across the globe began discovering that some of their patients, who they had previously considered only comatose, in fact had no brain activity at all."

"They had discovered the beating-heart cadavers, people whose bodies were alive though their brains were dead."

"In some cases, their hearts kept beating and their organs kept functioning for a further 14 years; for one cadaver, this strange afterlife lasted two decades."

Now... If the brain were dead (by my understanding of a 'dead' organ - its cells have died), wouldn't the brain start either decomposing or being absorbed by the body? If that happens, it's a pretty clear indication that the brain is truly dead. But if it doesn't happen, doesn't that mean the brain cells are still alive (just not communicating for some reason)? And in that case (living cells with a blank EEG), couldn't there be a way to jump-start their communication, as was previously discovered with hearts?

I'm sure that I'm completely ignorant of some critical factor, and look forward to your thorough discussion of it...

> It goes back and forth as to what people call them but I think patient is the correct term, says Eelco Wijdicks, a neurologist from Rochester, Minnesota.

I can't tell if this is an error, as if it should have been "[...] patient is the correct term" or if the neurologist is trying to be funny, or make a point. IMO the writer/editor should've omitted this quote or clarified its context and overall meaning to the article.

Does anyone know if I have the option to import an SSH private key, and whether or not that data is stored in the Secure Enclave? If so this would be very valuable for working on the go, but I'm very wary of the security if it will be used for work.

I'd love to see some documentation regarding the security, and some proof that keys are not exportable.

Marvellous! I tried an iPad Pro about half a year ago, but the lack of a good ssh/mosh client made me return it. Perhaps I'll need to check it out again -- especially the smaller Pro seems like it could be a fine choice now for a portable terminal.

This is surprising, and it could mean that early commenters have their voice heard more than later commenters. But it could also just mean that topics raised by early commenters end up being discussed in those threads, and commenters on both sides have their voices heard.

For example, if the first comment is roughly "I read the linked article and disagree because of A, B, and C" then someone with an opposing viewpoint would probably reply to the existing first comment instead of creating a separate comment.

This doesn't mean that the second commenter's voice isn't heard; it just means it all happens under the thread created by the first commenter.

Back when I was a regular on LessWrong and it had an active community, it didn't have that problem at all, despite using a variant of the Reddit codebase. Their solution was pretty simple: just show the five most recent comments in the right-hand column. People would interact with them, check out their parent comments, and threads would grow organically regardless of age. It might be tricky to scale that to larger communities though.

The same thing happens with search engines as well, where items at the top of the search results tend to stay at the top because they're purchased or clicked on at a higher rate. Can anyone chime in with what they've done to fix this in a search context? Has anyone had success with a multi-armed bandit or randomization approach?
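One hedged sketch of the randomization idea: an epsilon-greedy re-rank that occasionally promotes a lower-ranked result so it can gather click data. Function and parameter names are illustrative; production systems use full bandit algorithms with click feedback.

```python
import random

def rerank(results, epsilon=0.1, rng=None):
    """Epsilon-greedy re-ranking: with probability epsilon, promote a
    random lower-ranked result into the top slot so it has a chance to
    collect clicks. `results` is ordered best-first by the base ranker."""
    rng = rng or random.Random()
    results = list(results)
    if len(results) > 1 and rng.random() < epsilon:
        # Pick a random "explore" candidate from below the top slot
        # and move it to the front.
        i = rng.randrange(1, len(results))
        results.insert(0, results.pop(i))
    return results
```

The exploration rate trades off result quality against escaping the rich-get-richer feedback loop the comment describes.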

Vote inflation might serve as an interesting dynamic for allowing others' voices to be heard. While a new vote would always carry the same value, acquired votes could suffer from inflationary decay, providing a rotation of the comments that get seen. You could probably modify this decay based on thread velocity.
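That decay idea could be sketched roughly like this; the half-life value and the exponential shape are arbitrary assumptions, not anything a real site is known to use:

```python
def decayed_score(vote_times_hours, now_hours, half_life_hours=24.0):
    """Score a comment by summing its votes, where each vote's weight
    decays exponentially from the moment it was cast. A fresh vote is
    always worth 1.0, so new activity can rotate older comments out.

    vote_times_hours: timestamps (in hours) at which votes arrived."""
    return sum(0.5 ** ((now_hours - t) / half_life_hours)
               for t in vote_times_hours)
```

Shortening `half_life_hours` on fast-moving threads would give the velocity-sensitive rotation the comment suggests.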

I thought this was going to be a link to [1], from a talk given at Swift Summit a couple days ago. I haven't really looked at the code for either one to compare them, but for people who are interested, here's a second simple genetic algorithm in Swift (found in RubikSwift/GeneticsSolver.swift)

I love that the author provides docker images for cross compilation. Cross-compilation environments are such a pain to set up and maintain; having docker images with all that stuff in them is a major step forward.

Next time I want to build a desktop app, I'll definitely give this a go.

A while back I was building a Qt Widgets application in Python with PySide bindings. I needed to do some simple processing in several slots and decided to use lambdas as an easy shortcut. Kids, do not even think about attempting this at home, seriously.

I ran into some extremely elusive random crashes. I never dug to the bottom of the issue, although my guess is that the Python lambdas were executed in the main (constructing) thread's context, and that caused race conditions.
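Threading aside, another notorious hazard when wiring lambdas up as slots in a loop is Python's late binding of closed-over variables. A framework-free sketch of the trap and two common fixes (no Qt required to reproduce it):

```python
from functools import partial

# The classic late-binding trap: every lambda closes over the *variable*
# i, not its value at wiring time, so all of them see the final value.
buggy_slots = [lambda: i for i in range(3)]

# Fix 1: bind the value at definition time with a default argument.
fixed_slots = [lambda i=i: i for i in range(3)]

# Fix 2: functools.partial freezes the argument explicitly; this is also
# a common recommendation for Qt signal connections.
partial_slots = [partial(lambda i: i, i) for i in range(3)]
```

Calling each `buggy_slots` entry returns 2, while the fixed versions return 0, 1, 2 as intended.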

My question: how do Golang's goroutines interact with native threads spawned by Qt?

Question: does Qt still have a licensing problem for closed-source apps on mobile platforms?

Last I saw, Qt's GPL/LGPL license required you to open-source any code that was statically linked to it. That wasn't much of a problem on desktop platforms, as you could just package all of Qt into its own DLL (or the equivalent) and link to it dynamically. But this isn't allowed on platforms like iOS, so you were required to buy a license for those.

The biggest pain for me in managing a system is handling all of the service configurations; making sure the ACLs are in place, that the configuration files are all in the right place and in the correct format.

Serverless isn't saving me any of that pain. I still have to configure ACLs for everything, web folders in S3, ensure that my backend isn't hitting any concurrency or timeout limits, ensure that all of my routes are set up properly in the API gateway, that my DB queries are properly tuned and that someone is watching that DB...

All it's really buying me is not having to write Ansible scripts; instead I have to write CloudFormation templates. Sure, I have to think about maintenance and troubleshooting less, but when I do have to troubleshoot, I'm in for a long, frustrating day.

As easy as it is to create VMs these days, the serverless story is nowhere near as compelling as it should be. It makes the easy things easier, and the hard things really f'ing hard.

Concrete example: I wanted to just store the contents of a GitHub webhook POST in S3, after verifying the hashed secret. Should be a simple case of wiring the API Gateway to S3, right?

First bug: You can't test an API Gateway -> S3 connection if they are both in the same region. Known issue from back in 2014.

First hurdle: You can't pass the contents of a POST to an auth hook in API Gateway, just a single header value. That means I can't use the API Gateway authentication hook for this purpose, since GitHub creates an HMAC hash of the POST contents.

Second hurdle: Finding the proper Velocity Template Language function (and its location in Amazon's custom libraries) to escape the JSON from the webhook body so I could pass it on to a lambda function. It's '$util.escapeJavaScript($input.body)' by the way. You're welcome.

By this point, I was wishing I had just set up a t2.nano server running Flask.
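For reference, the check GitHub performs, and the reason the full body rather than a lone header must reach the verifying code, can be sketched in a few lines. The function name is illustrative, and the `sha1=` header format matches the `X-Hub-Signature` scheme of that era (newer webhooks also offer an SHA-256 variant):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub X-Hub-Signature header ("sha1=<hexdigest>") against
    an HMAC-SHA1 of the raw request body. Because the digest covers the
    whole body, any auth hook that only sees one header value cannot
    perform this verification."""
    expected = "sha1=" + hmac.new(secret, body, hashlib.sha1).hexdigest()
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(expected, signature_header)
```

Nothing here needs VTL or API Gateway specifics, which is part of the commenter's point: on a plain server this is a five-line problem.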

Tangentially related question. What deployment tools are people using to manage systems with many lambda functions?

We use SWF to trigger over 50 separate lambda functions in processing. We've got some very nice internally developed tools to identify which functions are out of date and help with deployment. I'm just curious what else is available to handle DevOps tasks in a Serverless environment (i.e. deploying library updates, etc).

Has anyone used the AWS API Gateway service with lambdas to set up a small backend for a web service? The next thing I'm thinking about setting up is a slush reader with a GUI, and I'm wondering how difficult it will be to pipe binary files from S3 -> Lambda -> API Gateway -> client. It looks like I'll probably need to encode the binaries to base64 on the lambda side, and will need a library for that.
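The base64 step the comment anticipates needs only the standard library. A minimal sketch; the `isBase64Encoded` field follows the Lambda proxy-integration response shape, but the rest of the payload structure here is a simplified assumption:

```python
import base64

def encode_for_gateway(raw: bytes) -> dict:
    """Wrap binary bytes (e.g. an S3 object) for return through an API
    layer that only passes JSON-safe text out of the function."""
    return {
        "isBase64Encoded": True,
        "body": base64.b64encode(raw).decode("ascii"),
    }

def decode_on_client(payload: dict) -> bytes:
    """Reverse the encoding on the client side."""
    return base64.b64decode(payload["body"])
```

The round trip inflates payload size by roughly a third, worth remembering when piping larger binaries through.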

You don't need crypto. You just need a machine that prints out a human-readable receipt that the voter can see but not alter, which then drops into a secure holding area in the machine. At the end of the day, you randomly select, say, 1% of all the machines and hand-count all the ballots inside, making sure the counts and votes match. If they do, then you can be reasonably sure there was no tampering, and if they don't match, then you can hand-count all the paper ballots using the old system to verify the computer.
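The 1% sampling step above is simple to sketch. Names are illustrative, and real risk-limiting audits choose the sample size statistically rather than as a fixed fraction:

```python
import random

def machines_to_audit(machine_ids, fraction=0.01, rng=None):
    """Randomly pick which machines get their paper receipts hand-counted.
    The draw must happen *after* polls close, so no machine operator can
    know in advance whether their machine will be checked."""
    rng = rng or random.SystemRandom()  # cryptographic randomness by default
    k = max(1, round(len(machine_ids) * fraction))
    return rng.sample(list(machine_ids), k)
```

A public, verifiable source of randomness (dice rolls at a press event, for instance) is often used for the draw so observers can trust the selection itself.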

The main problem with digitally signing/encrypting a vote is that the public key must be known, which means that someone's vote cannot be anonymous.

I like the idea of a county/precinct/district being the entity that signs the results of a vote. This would protect the anonymity of individual voters, since results are reported on a per-county basis today anyway. And if a county is suspected of voter fraud, you could always add its public key to a blacklist.

There should be a zeroth rule for any voting system intended for government use: the average person must be able to audit and understand the system. This was in an ACM magazine article.

It's a fun idea, but I would like to see you expand the article with a little more consideration around the edges. The core idea isn't really new.

Possible expansion topics:

- Handling a ballot with multiple separate races
- A backup system for when the computers have gone down
- Whether it can assist in any way with "voter suppression" issues

I don't like the idea that the encryption of the full list would be reversible. If I had contrarian views, I might be afraid to vote if that would put me on a list. Stick to one-way encryption, with a gatekeeper controlling entry.

As a software engineer who knows very little about government security and voting security, can someone explain why you can't just build it like a regular web app (with very good security measures -- the usual HTTPS, database encryption, proper firewall rules to servers, etc.), and have the user enter their voter ID and social security and submit their vote via a web form?

From reading this article, it would seem that it satisfies the first 4 points, just not the 5th. Is that the main reason to have to use the blockchain, to prevent tampering from the inside?

The article itself is slightly more nuanced, but the concept the title purports is DANGEROUSLY INCOMPETENT for any security expert, discussion, or context.

1) The idea that security is something you check off and are done with is dangerously wrong. Security must be continuous: updated, reviewed, etc.

2) The idea that you can "encrypt" [secure] your entire life is ludicrous and leads to many dangerous security misconceptions. You don't even have control of your entire life, let alone the ability to secure it. Most of the data about you is owned by others and isn't even available to you to secure. The world is not private or secure. Everyone needs to keep this in mind when they are tweeting, sexting, or talking trash about the future president, and then being surprised when the Secret Service comes to investigate.

3) The idea that security is on/off, a binary, that you can either be secure or not, is false and leads to extremely poor security choices and over/under-securing. Nothing is secure. There is no such thing as SECURE. Things lie on a gradient from easy to break to impractically difficult to break. Even things at the impractically-difficult end are still broken via social engineering, externalities (e.g. CPU power-consumption side channels), poor practices surrounding the item, etc. Security is making the effort required to get an item greater than the value of getting the item.

[Copied from my comment on a duplicate post -- there seems to be random tracking junk at the end of the URL that prevents these from being detected as duplicates!]

I appreciate how practical these tips are and I hope people will follow them.

I have two quibbles with this:

> Andy Grove was a Hungarian refugee who escaped communism [... and] encourages us to be paranoid.

I'm pretty sure that Grove was referring to business strategy, not communications security.

> Congratulations, you can now use the internet with peace of mind that it's virtually impossible for you to be tracked.

Something I've seen over and over again is that Tor users tend to have a poor understanding of what Tor protects and doesn't protect. The original Tor paper said that Tor (or any technology of its kind) can't protect you against someone who can see both sides of the connection -- including just their timing. Sometimes, some adversaries can see both sides of a person's connection. As The Grugq and others have documented, Tor users like Eldo Kim and Jeremy Hammond were caught by law enforcement because someone was monitoring the home and university networks from which they connected to Tor and saw that they used Tor at exactly the same time or times as the suspects did. (In Hammond's case, recurrently, confirming law enforcement's hypothesis about his identity; in Kim's case, only once, but apparently he was the only person at the university who used Tor at that specific time.)

As law enforcement has actually identified Tor users in these cases, I think people need to understand that Tor is not magic and it protects certain things and not other things. In fact, I helped to make a chart about this a few years ago:

This chart was meant to show why using HTTPS is important when you use Tor, but it also points to other possible attacks (including an end-to-end timing correlation attack, represented in the chart by NSA observing the connection at two different places on the network) because many people in the picture know something about what the user is doing.

I've been a fan of Tor for many years, but I think we have to do a lot better at communicating about its limitations.
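The end-to-end timing correlation described above can be made concrete with a toy sketch. The function and thresholds are purely illustrative; real traffic-confirmation attacks correlate far richer signals than bare timestamps:

```python
def correlated_sessions(suspect_times, identity_times, window_s=60):
    """Count how often a suspect's observed Tor connections (e.g. from
    campus netflow logs) line up in time with activity attributed to an
    anonymous identity. Repeated coincidences are what identified the
    users in the cases above; Tor itself cannot hide this correlation.
    Both inputs are lists of unix timestamps."""
    hits = 0
    for t in suspect_times:
        if any(abs(t - h) <= window_s for h in identity_times):
            hits += 1
    return hits
```

A single hit proves little (as in the Kim case, context did the rest), but recurring hits across weeks, as with Hammond, become damning.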

Isn't SMS-based 2FA considered dangerous now? We've seen how susceptible it can be to social engineering.

On a related note, I noticed that my Windows Phone displays text message notifications even when it's locked... so adding a PIN doesn't stop an attacker from reading 2FA codes if they have access to my phone.

I really like this post. So much NLP research is 'locked away' in academic papers, and making the knowledge more accessible through posts like this is very important for large-scale adoption by non-academics.

Also, really well done on the site design. Love the graphics, font, layout and 'progress bar' animation at the top. Very nice UX overall.

I really loved reading this article but it's always so hard to figure out exactly how these things work out in detail.

I understand matrix multiplication, but it seems that (some of) these matrix-vector calculations are actually trained as part of the neural net... and exactly how that works I can't figure out from articles like this.

So I took the 2016 numbers from this and used them to make this graph, using the same data but normalizing for GitHub's growth, so it's showing each language's percentage share of the total instead of absolute numbers. It tells a different story:

You know what I like about this? That if you have an objective enough mind, you can see how awesome it is that all of these languages have so many open-source projects. For people like myself (a C# developer), it makes me realize I need to do more with OSS, and not just to be better than the next language.

I worked with MVC before, but eventually I found that DDD (Domain-Driven Design) is what I'm looking for. Domain-Driven Design is more like a set of rules for applying the existing design patterns (e.g. repository, factory, and aggregate) and building blocks (e.g. layered architecture) to design your business models and maintain integrity and invariants in your data. Moreover, it lets you easily define the boundaries between your services for a microservice architecture.

MVC was originally designed as a pattern for desktop UIs. It has a single controller and a single model, unlike the MVC style that Rails popularized, with one controller class and one model class for every kind of data. In classic MVC, views query the model for relevant data. The controller handles user actions and uses them to update the model, then asks the view to redraw (preferably in some smart, efficient manner). This is unidirectional data flow, pure and simple, and it dates from 1988. How is React+Redux any different from this?

I agree with the author that components are the true innovation of React, because it encourages reusable building blocks by default. Contrast with classic desktop and mobile UI toolkits which, while often also using or encouraging MVC, do not require the developer to subdivide their own codebase into reusable widgets. Instead, they allow composing an entire screen (window, page, form, whatever) from the built-in widgets. Making a reusable widget is possible but extra work, and therefore not done. In React, it's the only way to go.

This is awesome about React, but it has nothing to do with data flow architecture, which is what MVC is.

The mistake web developers made for years was trying to shoehorn DHH's backend remix of MVC into the frontend, throwing away decades of UI-building architecture knowledge. I'm happy the Facebook people rediscovered MVC, and I'm even happy they gave it a new name (Flux), because MVC frankly has acquired way too many definitions.

But saying that Flux/Redux killed MVC is like saying Clojure killed Lisp.

It's the same pattern. The whole point of MVC isn't the precise implementation; it's about separating out the concerns of the view, the services/data, and a thing or some things that control and/or glue it all together.

The point is not to munge all these things into one monolithic, horrible, grim mess. As long as you are separating the concerns of showing something to a user, allowing them to control it, and backing it all with a potentially remote service, who cares what the pattern is precisely called? It's still a flavour of the original desire that we named MVC.

All these presenter/unidirectional patterns are _just_ the underlying desire of MVC, and the only reason people talk about how "MVC isn't right" or "is dead" is that they've followed MVC like dogma instead of as a guideline for separating your presentation, control, and service logic. I had exactly this debate back when everyone was talking about MVP as if it were some revolutionary new thing. It's not; it's all the same thing, and GoF was never supposed to be a template for software but a way of talking about specific ideas that architects could then riff on. They're chords, not one specific tune that you _must_ play in a very specific way.

> As more and more developers start to see the advantages of components and unidirectional architectures, the focus will be on building better tools and libraries that go down that path.

"Unidirectional architecture" is a weird name for what is a fairly standard abstraction. Every front-end at the top level is:

f(my_entire_state, some_event) -> my_entire_state'

In the end all you need are well defined state transitions, an event bus and a main loop to connect the two. Persist the sequence of events for undo/redo, audit, debugging ... and the current state for caching or having durability between sessions. Push side-effects to the boundaries. You may find this is good enough.
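That top-level signature can be made concrete with a toy sketch; the counter "app" and function names are illustrative:

```python
from typing import Any, Callable

def reduce_events(reducer: Callable[[Any, Any], Any], initial, events):
    """Run the whole front end as described above: fold each event
    through one pure state-transition function. Persisting `events`
    gives undo/redo, audit, and debugging for free; persisting the
    latest state gives caching and durability between sessions."""
    state = initial
    history = [state]
    for ev in events:
        state = reducer(state, ev)
        history.append(state)
    return state, history

# A toy "app": the entire state is an int, events are strings.
def counter(state, event):
    if event == "inc":
        return state + 1
    if event == "dec":
        return state - 1
    return state  # unknown events leave the state unchanged
```

Everything else (rendering, network, storage) lives at the boundaries, consuming the state or feeding the event stream.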

Front-end development has been plagued by unclear patterns with weird names (MVC, MVVM, MXYZ...) since forever; every time the patterns are criticized you hear "you did not understand it", and new names keep popping up. It seems the industry is stuck remixing reasoning around nouns, and is unable to step back and reason around data.

BONUS: Sprinkle in some CSP to get elegant concurrency and throw away callback hell. Sprinkle in some React to get fast DOM manipulation and throw away Flux (heresy!): it has way too many names to worry about (action creator, action, dispatcher, callbacks, stores, store events, views) and encourages some bad practices around the use of stores.

At the moment I'm sold on Model-View-Intent, which is more declarative and non-OOP. I also have the feeling it lets me compose a bit more easily.

DOMStream = view(model(intent(DOM)))
DOMStream.subscribe(render)

intent() takes the DOM, wires up some interactions and returns streams "of" these interactions

model() takes streams of interactions, wires them up with data retrieval and mutation streams and returns these data-streams

view() takes the data-streams and uses them to create DOM mutations-stream, which it returns.

The nice thing is that these observable streams are really easy to filter, map, debounce, etc. They also help with fast realtime data, because you can easily wire up these fast streams to a tiny part of your app and the rest of it won't even notice (which is a bit ugly if you have a big state tree that represents your whole app state).

The not so nice thing is, that controlling them completely declaratively has a steep learning curve.
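For readers unfamiliar with the stream style above, here is a toy translation of the view(model(intent(DOM))) pipeline using a minimal hand-rolled observable. This is not Cycle.js or RxJS, and the intent/model/view bodies are invented for illustration:

```python
class Stream:
    """A minimal push-based observable: just enough to express the
    view(model(intent(dom))) composition sketched above."""
    def __init__(self):
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)
        return self

    def push(self, value):
        for fn in self._subs:
            fn(value)

    def map(self, fn):
        out = Stream()
        self.subscribe(lambda v: out.push(fn(v)))
        return out

    def filter(self, pred):
        out = Stream()
        self.subscribe(lambda v: pred(v) and out.push(v))
        return out

# intent: raw DOM events -> meaningful interactions
def intent(events):
    return events.filter(lambda e: e == "click")

# model: interactions -> data-streams (here, a running click count)
def model(clicks):
    count = {"n": 0}
    def bump(_):
        count["n"] += 1
        return count["n"]
    return clicks.map(bump)

# view: data-streams -> DOM mutation stream
def view(counts):
    return counts.map(lambda n: f"<span>{n} clicks</span>")
```

Wiring `view(model(intent(events))).subscribe(render)` then makes each raw event flow through the whole pipeline, which is exactly the composition the comment describes.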

The whole "X is dead" thing is a pretty common headline pattern. It's dramatic, extreme, and often just an exaggeration.

I can guarantee that MVC is still used on the frontend as well as on the backend. People are still generating value from MVC apps.

Perhaps it's not held up as the holy grail, since the new holy grail is pronouncing it dead. Perhaps it is dying. Dead it is often not.

After building out some front-end MVC work, I can see the value in Flux architecture and React components. I'd have a hard time justifying a rewrite though. It's still very much alive there for me. That's also the case for many others.

I closely follow the Elm way (but in javascript with React), and it's still pretty much MVC.

Model = the state. Controller = the update function changing the state based on an action. View = declarative; it needs to send actions to change the state, and gets redrawn on state change.

Redux/flux are also similar.

The idea of MVC, as far as I'm concerned, is to separate the View (declarative as much as possible), the State (just data, no logic), and the Controller (business logic receiving commands/actions/calls/whatever, which changes the state and lets the view know that it needs to update).

We've sort of evolved in our stack, which leans heavily on two less-popular options (MobX + Horizon), towards a pattern that I now think of as invaluable.

I started calling it Model-View-Store recently as I think that best describes it. There are a few unique things here that I think are valuable.

Starting with Models: models all return observable values. So if I query for a single record I get back an observable object, and for a list, an observable array. I define `@query` decorators on the models to set up these queries. Models include prop validation, normalization, consistent operation names, and more.

Views come in two types: application and presentational. App views are all decorated to automatically react to mobx observables, so you can literally have a Model `Car` and in a view call your query `Car.latest()`, and your view will trigger the query and react to it accordingly. One line model <-> view connections!

Then you have Stores: they are just logical containers for views. Any time a view needs more than some very simple logic, you can attach a mobx store to it with very little code. Stores can also manage the data from the Model (and because the Model returns observables, this is usually a one-liner). But they don't have to. Stores are located side-by-side with the views they control (and can be passed down to sub-views when needed).

I've been working on this system for a bit now along with our startup and we've been able to deliver some pretty incredible stuff very quickly. Having a consistent model system is crucial, I can't imagine programming without having prop validation and a single place to look for my model queries.

Going to be releasing pieces of it over the coming weeks, and hopefully a full-stack example that's really legit soon.

I'm a bit confused by the way the article describes MVC as going from "server-side" to "front-end" :)

When I was growing up coding native UI apps, MVC was all about the front-end. UI toolkits were traditionally MVC or M(V+C) going all the way back to Smalltalk, and "server-side" typically meant apps without a "V", or with a "V" so small and hardcoded that there was no real separation...

Am I the only one thinking this kind of discussion is not hitting the nail on the head? I think the main issue with web UIs is the lack of good UI design tools and nice, robust components. When I am developing a UI I spend a lot of time writing code when I just want to drag and drop components.

MV arises naturally. You can run the entire application logic without UI components, and you can test it that way also.

Other abstractions are designed similarly to allow tests against pieces of the internal API in total isolation, though separated along different lines. Maybe you have transient state stored in view models while persisted state is stored in models. The isolation only helps as size grows, and it can keep size under control, mostly by having a good data model for what each component / layer / aspect actually does.

It makes it easier to rewire everything when you start with a switchboard. UI changes are either "Oh my god we're going to have to re-write so much stuff if we do it that way!" and you wind up not making drastic UI changes or "Sure, we can make that thing tickle the controller instead of that other thing."

For anyone interested in a surprisingly pleasant to use front-end MVC framework I am a huge fan of http://mithril.js.org/.

It espouses a (I believe?) slightly more traditional MVC interpretation where your Controllers are usually extremely light and in most cases completely optional. It encourages an MVVMC (Model View View-Model Controller) approach to encapsulate view state.

OP is painting with an overly broad brush: Angular is not representative of all client-side MVC. I maintain an app where the view handles the browser events - as it should, and the only data that gets passed between the view and the controllers are the (business) models.

On Android, MVC is dead. Talk about overly circuitous code that is not straightforward to step through. MVP is great when not overdone by folks (e.g. making a model-view-presenter unit recursively for every element on screen rather than having one unit associated with a given screen).

I think there is a problem defining MVC. If you argue that current innovations in the frontend do not violate MVC, then MVC is a pretty useless term. If you take a very conservative, OOP definition of MVC, I don't think React fits the model. Of course that means that many other frameworks are not truly MVC but MVC-inspired, and I don't have a problem with that.

Agreed. MVC has always been sort of an awkward pattern for frontend web development. The fact that the DOM is extrinsic to the JS runtime can lead to a messy collaboration of MVC components. An input that makes an AJAX request on each keystroke and renders the results to a list (typeahead.js for example) is easy to implement but involves quite a bit of indirection with this paradigm.

That being said, I don't know how to feel about Angular 2's approach to components. Decorators are useful but can diminish the benefits of component based architecture when misused.

I wouldn't say MVC is dead in the front-end, rather I would say MVC is a necessary stepping stone towards modern front-end architecture. It still has some kind of model, view and whatever the controller is called.

Before the time of single-page applications, MVC was not practical, as the state of the page would be destroyed as soon as you clicked on a link. With the whole "single page" approach your page lives on and your application remains. This was considered the "holy grail". We thought all the problems were solved, but then came the performance and memory-leak issues. Because the page lives on throughout the user journey, memory management became a problem, and so did SEO. As always, front-end development will continue to change rapidly year after year, and it's now all about Flux/Redux and universal JavaScript?

Saying MVC is dead is like saying the "CPU" is dead: no it's not, but it will always keep on improving.

BeautifulSoup does not use or create "the DOM". It does convert HTML into a tree, but that tree is somewhat different from a browser's Document Object Model. For most screen-scraping purposes, this doesn't matter. But if the page uses JavaScript to manipulate the DOM on page load, it does matter.

I have a tool for looking at a web page through BeautifulSoup. This reads the page from a server, parses it into a tree with BeautifulSoup using the HTML5 parser, discards all Javascript, makes all links absolute, and turns the tree back into HTML in UTF-8, properly indented. If you run a page through this and it still makes sense, scraping will probably work. If not, simple scraping won't work and you'll probably have to use a program-controlled browser that will execute JavaScript.
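A rough sketch of that kind of cleanup pass (assuming BeautifulSoup 4 is installed; `clean_page` and the sample HTML are made up for illustration, and this uses the stdlib parser where the author's tool uses the HTML5 one):

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def clean_page(html, base_url):
    soup = BeautifulSoup(html, "html.parser")
    # Discard all JavaScript -- BeautifulSoup's tree has no DOM to run it in.
    for script in soup.find_all("script"):
        script.decompose()
    # Make all links absolute relative to the page's URL.
    for a in soup.find_all("a", href=True):
        a["href"] = urljoin(base_url, a["href"])
    # Re-serialize as properly indented HTML.
    return soup.prettify()

html = '<html><body><script>alert(1)</script><a href="/x">x</a></body></html>'
print(clean_page(html, "https://example.com/"))
```

If the output of something like this still makes sense, plain scraping will probably work; if the page is an empty shell that JavaScript fills in, it won't.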

I feel like people are getting mad at the FBI for not pulling the trolley car lever in the right way, which is a valid thing to be mad about, but I believe the FBI made the right choice.

First, let's not rely too heavily on analogies with drugs or prostitution. The differences between CP and drugs or prostitution are too large to ignore anyway.

CP consumers are often producers as well. That's a fact: you want CP, so you make some yourself and swap it with others to get more. This isn't universal but it's common enough that you should know about it. So the visitors to the CP web site are not all just consumers of CP; many of them are producers as well. This is relevant because you have to weigh the damage of distributing CP against the benefit of catching people who produce CP. People have stated that distribution revictimizes the children, but I would weigh that against the ability to catch people who were either producing their own or at least supporting other producers of CP.

So the FBI discovers this server, operates it for less than 30 days with a Tor exploit, and catches 200 people using the site. Yes, the FBI was complicit in the distribution of CP, but rephrased as a trolley car problem, this is basically like not pulling the lever, allowing the distribution to continue for a short time, and using that to catch 200 consumers, and how many of them producers? You can pull the lever now and stop the distribution of CP, or you can let the trolley barrel down the tracks for a short time and save all these people somewhere else.

(People are saying that the exploit may have done damage to other police investigations from other countries. I don't see any evidence that the exploit damaged the computer, merely that it leaked information about the computer.)

1. The crime that utterly dwarfs all others is involving children in the making of child porn.

2. After that, the crimes that dwarf all the rest are those that provide financial or practical support to child porn makers. Consuming child porn is generally regarded as one of those, and I'm fine with that categorization.

3. I'm sorry, but violating a victim's theoretical privacy by distributing the images a little further doesn't seem to be nearly as big a deal as helping to prevent the next live video of child porn from being made.

I'm usually regarded as being pro-privacy, but privacy is not something to be a rabid extremist about. Preventing physical sexual abuse of children, on the other hand, is a fine area for extremism.

This is less like a drug or prostitution sting where the mark is arrested before the contraband can be consumed, and more like a hired hitman sting where the victim is actually murdered.

From a moral point of view, child pornography is deontologically wrong. Nothing can justify its existence. Even if such a sting managed to shut down the entire industry, it would be moot to attempt to argue for its moral goodness in consequentialist terms.

The FBI could have used other means to establish criminal intent in the visitors to the websites, along with the fact that they had used Tor to search out and visit those websites in the first place. They could have made prospective viewers engage in a series of incriminating acts, such as requiring them to follow a series of links with the promise of finding the material, or making them refresh the page. There was no need to provide the actual offensive material in order to make a solid case.

> On August 4, all the sites hosted by Freedom Hosting, some with no connection to child porn, began serving an error message with hidden code embedded in the page. Security researchers dissected the code and found it exploited a security hole in Firefox to identify users of the Tor Browser Bundle [https://www.wired.com/2013/09/freedom-hosting-fbi/]

However, as far as we know, unlike the more recent Playpen thing, in the Freedom hosting case the FBI did not actually serve child pornography, they just displayed an error message. I don't see anything in this article that suggests otherwise.

I have no love for those who visit child porn on Tor, but in general I am now very wary of the FBI. I can't help but feel it's a powerful organization that's slowly turning into a dark oppressive one. The power grab from the CIA for the Petraeus affair. Using the sensitive nerve of terrorism to demand Apple unlock a phone. Throwing a last-minute wrench in the Clinton campaign. This is not going to end, especially under Trump.

> a Tor exploit of some kind to force the browser to return the user's actual IP address, operating system, MAC address, and other data. As part of the operation that took down Playpen, the FBI was then able to identify and arrest the nearly 200 child porn suspects.

So, is getting someone arrested as easy as spoofing their network information and visiting those sites? I can already imagine trolls using this to have people swatted.

I think the clear differential here is that compromising the server and tracking its users while it was in operation by Freedom Hosting would perhaps be "OK" but confiscating the server, moving it to HQ, and then operating the site themselves is decidedly not.

Keep in mind, you can't just pause the site and expect your targets not to notice; they had to actively maintain the site (and consider what that means) to keep their targets coming back. It's disgusting and disturbing. And what we know about it is likely just the tip of the iceberg.

At least with Fast & Furious I think it was real criminals running the guns and just a failure to intervene. I think a failure to intervene here would be seen as unacceptable as well. But here we have way more than failure to intervene, they effectively provided the guns and helped run them across the border.

I understand the ban on child porn is justified via the interstate commerce clause:

Federal jurisdiction is implicated if the child pornography offense occurred in interstate or foreign commerce. This includes, for example, using the U.S. Mails or common carriers to transport child pornography across state or international borders. Additionally, federal jurisdiction almost always applies when the Internet is used to commit a child pornography violation. Even if the child pornography image itself did not travel across state or international borders, federal law may be implicated if the materials, such as the computer used to download the image or the CD-ROM used to store the image, originated or previously traveled in interstate or foreign commerce.

Sad for the artistic loss but also glad he died at peace after a rich life spent doing what he loved till the last moment. He joins a special list, alongside Hintjens who also passed recently, of those who manage to strip the dread from death and stress the importance of 'tidying up' over passive acceptance as one enters the final days.

"The big change is the proximity to death," he said. "I am a tidy kind of guy. I like to tie up the strings if I can. If I can't, also, that's O.K. But my natural thrust is to finish things that I've begun."

"For some odd reason," he went on, "I have all my marbles, so far. I have many resources, some cultivated on a personal level, but circumstantial, too: my daughter and her children live downstairs, and my son lives two blocks down the street. So I am extremely blessed. I have an assistant who is devoted and skillful. I have a friend like Bob and another friend or two who make my life very rich. So in a certain sense I've never had it better. . . . At a certain point, if you still have your marbles and are not faced with serious financial challenges, you have a chance to put your house in order. It's a cliché, but it's underestimated as an analgesic on all levels. Putting your house in order, if you can do it, is one of the most comforting activities, and the benefits of it are incalculable." [0]

"In a pursuit like rock 'n' roll, which is entirely devoted to redemption, Cohen's ideas were not only old but radical. His peers all insisted that salvation was at hand. To go to a Doors concert was to stare at the lithe messiah undressing on stage and believe that it was entirely possible to break on through to the other side. To see Cohen play was to gawk at an aging Jew telling you that life was hard and laced with sorrow but that if we love each other and fuck one another and have the mad courage to laugh even when the sun is clearly setting, we'll be just all right. To borrow a metaphor from a field never too far from Cohen's heart, theology: Morrison, Hendrix, Joplin, and the rest were all good Christians, and they set themselves up as the redeemers who had to die for the sins of their fans. Cohen was a Jew, and like Jews he believed that salvation was nothing more than a lot of hard work and a small but sustainable reward."

I have cried more tears listening to Leonard Cohen than all the other tears I've cried combined; his music, his words, his poems have always resonated deeply within me. He truly is my favourite artist. We listened to him daily in my dad's house and I grew to find an incredible amount of peace in his voice. Love that the HN community seems to like him as much. Rest well, sir.

I took my mum to see him live in Brisbane back in 2013. At the end of the show he thanked his backing band. And then he thanked, by name, the sound engineers, the lighting operators, the cameraman filming for the tour DVD, and various other staff. One of the greatest musicians to have lived but also a genuine and decent person.

Interestingly, I was thinking about him on the subway just before surfacing to see the above thread pop up. Very glad that I got to catch up on his bio (and to see him live a few years back) before his departure.

This is sad. I heard about this listening to the radio this morning while waiting for the school bus with my son. I introduced my (future) wife to his music and our wedding song was "Dance me to the end of love." I gave her the Matisse coffee-table book set to its lyrics (https://www.amazon.com/Dance-End-Love-Art-Poetry/dp/19321839...).

So sad. I saw him in Manhattan. My wife got tickets as a Christmas present, knowing I had been excited when we were traveling in Barcelona and he was there that week. (Couldn't get tickets -- didn't even try.) So we showed up at Madison Square Garden and I had no idea what we were seeing -- and there were no markings to give away the surprise! It was not until the show started that I knew it was a Leonard Cohen concert. It was an awesome evening.

If you want a lover, I'll do anything you ask me to
And if you want another kind of love, I'll wear a mask for you
If you want a partner, take my hand
Or if you want to strike me down in anger, here I stand
I'm your man

Easily one of the greatest songwriters of the generation. As Dylan said, "As far as I'm concerned, Leonard, you're Number 1. I'm Number Zero."

His songs are dark and poetic and really keep you entranced. I'm glad he released his last effort (Leaving the Table is a great one for the occasion) and seemed totally at peace in his New Yorker feature.

There was recently a lovely interview with Leonard Cohen on Fresh Air [1], from which I learned that he had also spent some time as a Zen monk. The article mentions this in passing, but there's more detail in the interview.

> While never abandoning Judaism, the Sabbath-observing songwriter attributed Buddhism to curbing the depressive episodes that had always plagued him.

A sad day. I'm a big fan, had the privilege of seeing him perform four times and I've named my daughter Suzanne.

I can't be sure, but back in 2008 when he played "Democracy is coming to the USA" he seemed to be delighted that Obama won. IMHO this part of the song is more appropriate for Leonard's last days on earth: "I love the country but I can't stand the scene".

About a week ago I commented to a friend of mine that his last album was absolutely wonderful, and how awesome it was that he was 80 years old and still creating with the quality that he was. He was the musician that touched me the most; others I sort of grew out of, but I always came back to Leonard. I suspect that there hasn't been a month in the last 15 years where I haven't listened to him. As an aside, I also discovered Irving Layton, a poet, through him.

RIP. Listening to his brand new release You Want It Darker (via Spotify HQ on my Audio Technica ADG1Xs, Soundblaster ZXR sound card). I can't get over the quality of the production and how utterly perfect his voice still is .. right until the very end. In light of recent events, this song is uncanny. What a masterpiece to finish a magnificent career.

I don't get why people are so emotional when famous artists die. Posting on Facebook and whatnot... We weren't personal friends with them, so it won't affect our lives in any way. Their works are still as available as ever, and still as great as ever. We can still listen to their music every day.

If they died old then they've had a good run to make a good body of great work that can be their direct legacy for hundreds of years. Few people achieve that.

It seems like so many of my favorite musicians and songwriters have passed in the last few years, and it's a struggle to figure out why. I'm a millennial with a wide taste in music, from early-20th-century blues to contemporary EDM, but it seems like the musicians whose talent you could just sense with every note and lyric are rapidly disappearing. I should be too young for this kind of cynicism, but it's an easy trap to fall into when comparing Dylan, Bowie, or Cohen to some song on the pop charts or an artist in the overwhelming field of independent musicians.

It's a sad day but I can't help but marvel at the universe. It is a kind of unique, rare beauty when a life-long artist like Bowie or Cohen close out their final chapter by releasing an album within weeks of their death.

RIP. Those looking to try Cohen for the first time (or wanting to rediscover him) should listen to Live in London (2009), which IMHO is one of the best live albums ever. Great songs and some witty banter in between.

Big fan of Leonard Cohen, big loss. I think his Isle of Wight performance is one of the greatest of all time considering what was going on the crowd and how he used his showmanship and calming music to turn things around:

If you haven't heard his songs, either the early folky ones or the post-80s electronic-ballads, definitely check them out. They are songs for grown-ups (he started his career as a singer around 34 years old after all).

Leonard's music had an uncanny sense of timing, both musical and cultural. He referenced the external, political world, indirectly - not through selfishly inward bullshit, like many of his contemporaries, but by sifting it through relationships with others and his relationship to the divine.

As I am writing this, the next article on Hacker News is about Peter Thiel and his ascension to whatever office he is seeking in Trump's cabinet. His views on the damage women and minorities have done to Libertarianism (whatever that is), and how democracy is shit, are well known, and I will let you judge how Palantir has benefited humanity.

The thing that gets me is his straight faced desire for immortality. Note that he doesn't wish for immortality for someone who is great, he wishes it for himself.

Maybe unjustified, but I'm always kind of suspicious of a post which is largely focused on raw data (like this one) and which presents only pie-chart visualisations [0].

Yes, Python has a pretty dirty history, with many people choosing to stick to the Python 2.7 that they knew and loved. And yes, commercial software tends to move waaay slower than the wider community (many banks are still running COBOL). If you're focused on making money and pleasing clients then "it worked for us before" is always going to be the strongest argument.

Major players in the Python ecosystem have pledged to move away from Python 2 [1], and if we had non-pie-chart visualisations, I'm sure we'd see huge trends towards Python 3 in the last year. Even slow-moving corporations are starting to use Python 3. Yes, macOS defaulting to Python 2 is still a problem, but Ubuntu switching to default Python 3 is already a huge step to get companies to move forwards.

Wouldn't call 28% "largely ignored", rather "still lagging behind". Thing is, 3.6 looks like the real game changer here, what Python 3.0 should've been. With asyncio, compact dicts and scandir I've gotten to the point where I can almost justify to my employer why, even if everything is still working, we should move to Python 3.6 ("everything will be faster" -- I know I'm kind of oversimplifying here, but my boss pretty much doesn't understand software and just wants results/money).
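For what it's worth, the scandir part is easy to demonstrate with nothing but the stdlib (a toy sketch, not a benchmark):

```python
# os.scandir (new in 3.5, used by os.walk from 3.5 on) returns DirEntry
# objects that cache file-type info from the directory listing, so
# is_file()/is_dir() usually need no extra stat() syscall per entry --
# which is where much of the speedup over os.listdir + os.stat comes from.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "a.txt"), "w").close()
    os.mkdir(os.path.join(d, "sub"))

    files = sorted(e.name for e in os.scandir(d) if e.is_file())
    dirs = sorted(e.name for e in os.scandir(d) if e.is_dir())

print(files, dirs)  # ['a.txt'] ['sub']
```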

There's also the sentiment of "I don't know if [insert module here] will be supported", which has become mostly baseless fear, but people still think Py3 support is lacking (when it's not![0]).

Edit: when I posted this comment, the link was titled "Python 3 largely ignored" (not the article per se, but the submitted link had been titled that way). It has been changed, but this was a bit of important context for my comment.

Former Python developer here. The problem was two-fold: Python 3 broke backwards- and forward-compatibility (in a practical sense: there were workarounds, but libraries didn't necessarily use 'em, and Python is nothing if not a glue language); and for the longest time there really wasn't a good reason to switch. I think it wasn't until last year that it really started to make sense to go with Python 3 and by then I'd already switched to Go.

I don't follow the Python community closely, so I might be totally off the mark here, but my general impression is that it was shipped prematurely. From what I've read, in more recent years a lot of work has gone into easing the transition... But it seems a bit late. As an outsider, I wonder: did they fail to put in a reasonable effort at providing a smooth transition from the start, or did they try and fail to account for the realities of our world?

Recklessly breaking backwards compatibility without a smooth migration plan is hostile to developers. Although it's not a programming language, this is something that I think React has gotten right with their current versioning scheme [0].

I find it funny how a small group of people thinks they know better than the broader community, to the point that they feel they should have a say in what thousands of businesses use to run successful code.

More than this, I would argue that most people using Python 3 are those new to the language. This is only from personal experience, so it's really just anecdotal.

PS: As a kind of jesting side note, I know the general argument is that "Python 2 was broken", but really, how broken can something be when thousands of businesses depend on it and, more than that, choose to keep using it when a "better" alternative comes about?

I'm not sure why, but the Mac has not installed a "/usr/bin/python3", so I continue to avoid the 3.x migration (aside from minor things like "__future__" imports for division and print_function).

I don't want to add a dependency for end users when my current code works fine with the built-in Python at 2.x. Nor do I want to bloat the size of downloads to embed something like an entire interpreter binary. (Swift has the same issue; there is currently no way to refer to it externally, as it is in flux; I do not want to embed loads of Swift libraries in every build, so I will wait to migrate until they have stabilized binaries and made them available from a common root.)

This headline is a bit much: a third of new projects use Python 3! A lot of people are excited about the language now that a lot of the warts have been dealt with and compatibility is higher than ever.

Though all it takes is one dependency to throw a wrench in the plans, the major projects are Py3 now.

Though porting large python 2 projects is still a huuuuuge pain. This is more a result of python 2 badness than python 3. But a lot of work.

Anecdotally, we had a simple events API running Python 2.7/Flask/uwsgi/nginx that needed to do quick I/O operations on a growing number of HTTP requests. To increase throughput, we experimented with Python 3 for its async/await-style concurrency. It didn't seem to help much, and we ended up just rewriting that API in Go; we see way better throughput-to-resource ratios by shipping 5 MB binaries to Docker & Kubernetes.
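For context, the async/await style in question looks roughly like this (stdlib-only toy; the real service would sit behind an async web framework, and `handle` here is a made-up stand-in for a request handler):

```python
import asyncio

async def handle(request_id):
    # Stand-in for a quick I/O operation (DB call, upstream HTTP request, ...).
    # While this one "request" waits, the event loop runs the others.
    await asyncio.sleep(0.01)
    return f"done-{request_id}"

async def main():
    # All "requests" wait concurrently instead of each blocking a worker;
    # gather preserves the original order of the coroutines.
    return await asyncio.gather(*(handle(i) for i in range(5)))

results = asyncio.run(main())
print(results)
```

This helps most when the workload is dominated by waiting on I/O; it does nothing for CPU-bound work, which may be part of why the gains were underwhelming in cases like the one described above.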

Point is, we're a hybrid shop that does all first-pass services in Python 2.7 and then moves them to Go when they become sufficiently trafficked and/or critical.

I blame, in part, the fact that the default install for OSes like Ubuntu and macOS is Python 2.7. If you are starting Python development, want to support more systems, and know the default is 2.7, then that's what you will target.

Coming from a country with an extended alphabet, I've always been aware of encoding issues, especially before almost everyone settled on using UTF. So the treatment in Python 3 (and equally in Go) is a game changer for me and many other language users out there. I'm not saying that, say, American developers don't need to support other alphabets, just that it's less pressing since they don't tend to deal with them on a daily basis.
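The "game changer" being referred to is Python 3's strict separation of text and bytes, which a few lines show (illustrative values; any non-ASCII string makes the same point):

```python
# In Python 3, text (str) and bytes are distinct types, so encoding
# problems surface immediately instead of as silent mojibake or a
# UnicodeDecodeError far from the real bug.
text = "žluťoučký"            # str: a sequence of Unicode code points
data = text.encode("utf-8")   # bytes: an explicit encoding step

# The encoded form is longer because the accented letters take 2 bytes each.
assert isinstance(data, bytes) and len(data) > len(text)

# Mixing the two types is an immediate TypeError in Python 3, where
# Python 2 would implicitly ASCII-decode and often fail much later.
try:
    _ = text + data
except TypeError as e:
    print("refused:", e)
```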

My colleagues in academia (scientific computing) mostly default to Python 2. Personally I'll switch to Python 3 when roughly over 50% of my colleagues use it, or when some mind-blowing new feature or library is introduced only for Python 3. Makes things simpler this way. Also, I don't currently have enough spare time to port all my code.

I've recently started using python a lot more, it was a natural progression when I moved from using a windows dev machine to a mac book. One day I needed to do something quick, and python was the quickest way to do it. I typed python into my terminal, saw 2.7, and built from that. I had no incentive to look into upgrading, I had everything I needed to accomplish what I wanted. My good experience then has grown to the point where it's probably the second most common tool in my toolbag, but it's still an "I need to do xyz, and I don't care how it looks I just want to see if this will work". So I still stick with the defaults. I only think about using the latest greatest tools if I think the code I'm writing has long term potential.

As languages go, Python 3 is pretty successful. It unfortunately shares a name with the language it is a variant of so people tend to compare the popularity of the two. The popularity of Python 3 should be judged against all similar languages, not just the insanely popular Python 2.

Commercial projects are the laggards. I'd expect them to be the last to move over. More interesting I think is the progress in distributions. This matters because commercial projects use distributions too, and will need to follow them in the end.

Let this (and Perl 6) be a lesson to language designers: do not make backward-incompatible changes, ever. It cripples adoption. Java managed this much better by being able to mark APIs as "deprecated".

That's exactly what I like about it. Stuff the "user experience": I don't want an AV product that tries to run my life for me. (I don't want Windows 10 to do it either, which is why I tried it for less than a week and went back to Windows 7.) AV products are bloated, difficult to use and always in your face when they should just silently remove viruses. Which is what MSE does for me.

I've felt like AV software is often worse than the viruses: intrusive, slow, ineffective, getting in the way, and not once detecting anything.

Pretty much all of my "family tech support" is related to the AV doing something stupid like auto-deleting cookies or flashing up big scary messages for something trivial.

However, Windows Defender has been good for me on Win10 - it just sits there out of the way; I don't even know it's running. I LIKE the fact that it doesn't have "online protection" or password managers or parental controls or whatever. It feels lightweight and does not cause everything to become 3x or 4x (or worse!) slower like every other AV software I've encountered.

Whenever I go to perform family tech support I remove any random AV software they've been tricked into installing and just leave Windows Defender and that usually solves the issues (obviously making sure they are up-to-date on patches & still using 2FA)

Microsoft has a problem - poor 3rd party software and drivers make Windows extremely unreliable. Microsoft takes the blame, and Windows looks unreliable. Apple takes a different tack, they simply lock down their APIs and ecosystem to avoid this. Is Microsoft trying to go the Apple route - but maintain some openness? Giving 3rd parties core low-level APIs is ripe for chaos.

I had to install Kaspersky on my main laptop since some VPN software imposed a policy requiring it to be installed and up to date to connect to a contractor's secured network. It was absolutely terrible. It killed my battery, slowed my machine, killed my TCP stack at one point, interfered all the time, and became generally unbearable. It frustrated me so much, I now do all network operations via a secured VM to avoid the Kaspersky curse on my main work machine.

When I was doing a lot of Mac OS X kernel / driver work 8-10 years ago and keeping up with all the Darwin lists, we'd get tons of questions from A/V devs porting their software from Windows to Mac. There were all kinds of bad questions. The worst one I remember is somebody asking why they were not allowed to hold a kernel mutex while notifying a kernel-space A/V daemon and waiting for it to respond (deadlock?).

After seeing multiple questions like this from these folks, I resolved to never run a 3rd party A/V suite again, and have run nothing but vendor provided A/V.

It's the pot calling the kettle black. Kaspersky is said to have faked malware to harm rivals (http://www.reuters.com/article/us-kaspersky-rivals-idUSKCN0Q...). TrendMicro allowed remote command execution on the user's machine (https://bugs.chromium.org/p/project-zero/issues/detail?id=69...). Maybe Microsoft is right and your PC is better protected without a third-party AV, using an ad blocker instead, for example. Maybe not (I'm not a specialist in the low-level stuff anyway). But the point is that MS is fighting with the tools it has (in an ugly way). And it's widespread: Apple enforces Safari on iPhones, for example. Amazon exploits bias in its marketplace (http://dealnews.com/features/Are-You-Really-Seeing-the-Best-...). It's kind of complicated, but that's how the world is: doing everything for higher profit. Because of that I'm sure Kaspersky would do the same if they were in Microsoft's position. Perhaps a little differently, but they would certainly take advantage of their size to increase their profit.

A friend of mine runs a 3-person software company making desktop Windows software. Nothing terribly exciting, think - a ToDo list or similar. They put nothing in the kernel, stick to documented API, make no deep tie-ins into the system (e.g. Windows Explorer extensions). Just a perfectly simple standalone piece of software with minimal dependencies that can run even on XP.

Not two months ago they started getting reports that the software was disappearing from users' machines. The Start menu icon was still there, as was the Uninstall entry, but the EXE was nowhere to be found. Naturally they thought of the antiviruses, but there was no pattern. Fast forward two weeks, and the only commonality between all reports was a freshly installed Windows 10 update. The update silently wiped their software off. And to understand why that happened, or to file a "false positive" report with Microsoft, the only option was to cough up a few hundred dollars to open a "priority support" ticket with them. Not everyone was affected, just a fraction of a percent. You could still reinstall the software and Windows wouldn't make a peep or complain in any way.

While it made very little sense, it still clearly showed that users were no longer in control of their machines. Moreover, Microsoft outright lied when they said "all your files and apps will remain where they are" while installing an update.

So it's not just about losing control over your own computer, but it's also about being treated like a sheep that Microsoft owes no explanations to and can do what the hell it wants with. I sure hope Kaspersky Labs will have enough rage, funds and patience to drag Microsoft through the courts and whip it back into place.

I think this article is mixing up three different things which is counterproductive when you're trying to convince people. Just go to the point (not the pointS).

1. Defender is not the best AV out there from a strict efficacy perspective (IMHO, Defender is good enough for most people and is quiet enough & bloat-free enough compared to a lot of the competition).

Antivirus and firewall are two apps that I expect come with Windows, so as a consumer who actually paid for Windows 10 I don't care if Kaspersky is whining about this.

As messy as av/fw are on Windows 10, let's not forget how things were before in the bad old days; security products were sometimes as bad as the malware they claimed to protect you from. Remember when you helped family and friends and how Norton was so difficult to remove it required a dedicated removal tool? Remember the countless cleaners that used all kinds of scummy advertising techniques to trick users into installing them, often decreasing performance and safety?

As the "computer guy" for a lot of people, I'm glad that AV+FW are included in Windows 10. I am, however, disappointed by how subpar they perform and how user-hostile they are.

On Windows 10, the firewall is completely opaque and Microsoft decided to remove the firewall icon from the tray. So users naturally don't know if it's installed or not or what it's doing. Also, it's buggy as hell because on more than one computer I've had to keep resetting it to defaults simply because it would regularly stop ALL outgoing connections. Took some time to figure that one out and for most casual users that would have been impossible to solve, especially since there is no freaking firewall icon to click on anymore.

The antivirus has a more visible and sane presence but performs poorly in the independent AV tests. For some reason it changes names more often than a porn star, further confusing users. The blog post fails to mention Microsoft Defender, the fifth incarnation of the AV on Windows 10, so there are five different AV that Microsoft offers/has offered.

Microsoft needs to improve the quality of their built-in security products, both how successful they are at protecting users but also the overall usability experience.

I don't get the MS strategy. They became popular because they made it easy for developers to build stuff for their platform. But then they started morphing their platform into some kafkaesque labyrinth of new, hip, and soon-to-be-retired libs/frameworks. If a developer is brave enough to master this, he will then be disappointed trying to monetize it. The Windows Store (or whatever the current name is) is like a combination of iTunes and the Play Store, but with only the downsides mixed together.

I believe that when I am using Windows, my interests are more closely aligned with Microsoft's interests than Kaspersky's. That's why I stopped using Kaspersky in favor of Microsoft's built in security product...and similarly why I stopped using Norton AntiVirus in favor of Kaspersky a few years before that.

To put it another way, Kaspersky's business, like many in the Windows ecosystem, is to AdWords or bloatware their way to rent extraction while free alternatives exist. I'm ok with Microsoft making that model obsolete and Kaspersky adapting or dying, because Kaspersky's argument isn't that it provides significantly better anti-virus protection.

I have installed Windows 10 on two systems. One has MSE enabled and the other has Eset Nod32. Nod32 was installed on that machine under Windows 7 and continued to work after converting the OS to Windows 10. Windows 10 is now relaying notifications that the Eset license needs to be renewed, but

IT IS NOT TRYING TO TRICK ME INTO USING MSE.

Also, back in the day I had an HP laptop with an AMD Duron processor, and it came with Symantec AV. I had overtemp shutdowns. I diagnosed that the AV was using most of the CPU cycles by far. So I researched the providers and somehow Nod32 came out on top across two or three different AV shootouts. I replaced Symantec with Nod32 and the laptop ran so much better. After that I only ran bundled AV on new machines until I could get around to installing Nod32. Nod32 continues to behave appropriately.

On the machine that runs MSE instead of Nod32, there was a different application chewing up the CPU cycles: The HP support assistant.

System security should be built-in, not bolted on by snake oil vendors looking to make a buck. Nothing against kaspersky, I like them, but I'm with Microsoft on this. Games, browsers and other add-ons are higher-layer applications where competition makes sense.

If MS really wanted to make system security an even playing field where vendors could actually be effective, they'd make it modular (like Linux's LSM) so that admins could easily swap out security solutions without busting the system (slow, bloated, ineffective, etc.).

Vendors are a large part of the problem. They want more money, more often and in many cases really harm performance and do little to protect the system.

I get their point, and Kaspersky is pretty good. However, antivirus products have typically been utter and complete crap, slowing down computers a ridiculous amount, and to non-technical people it's just "their computer is slow, oh look how fast this mac is".

My father has three freaking antivirus/antimalware solutions installed. Maybe defender could be better, but if it reduces the market share of the nortons, comodos etc then I'm all for it.

I ran into this when the Windows 10 Anniversary Update rolled out. In my case the program Microsoft uninstalled was a Start Menu replacement, so I didn't actually have a functional start menu for several hours after the upgrade until I got the updated version of the 3rd-party program installed.

This left me shocked, dumbfounded, speechless, and furious. Everything I've observed over the last 20 years says Microsoft honours backwards compatibility above all else. Raymond Chen has great blog posts about the huge efforts they used to go through. My understanding is that's why businesses have stuck with Windows; it'll keep running their 10, 15, 20 year old legacy VB line-of-business apps even on their newest OS. Apparently Microsoft has now decided to throw out backwards compatibility? I don't understand this decision.

I think that the unfortunate truth (well, IMO) is that if you want security and/or privacy then Microsoft is not the company for you. They have shown many times to be in bed with the NSA (http://techrights.org/wiki/index.php/Microsoft_and_the_NSA) - and I'm sure other spy agencies - and are becoming less open and friendly towards developers and their users. It is bad that these decisions are affecting businesses, but we can all make the change by moving towards more open operating systems and companies that give users and businesses back their freedom, privacy and security.

With the recent Humble Bundle deal I have tried again one of these antivirus products (Bitdefender Antivirus Plus) after only using Windows Defender/Security Essentials since the Windows 7 days.

Right after installing it I noticed that I had MITMed myself with their "Web Protection" feature. To show green check marks next to my Google search results, this "security" software intercepts my TLS traffic and alters it without my consent. At least Microsoft's solution isn't that desperate to make itself noticed, even at the expense of my network stack's integrity.

This is my main issue with the "security" industry for Windows. To justify their existence they have to remind their paying users all the time about their involvement and sometimes they use really stupid and dangerous methods to achieve this.

What jumped out at me was that the 'Compatibility Assistant' actually removed a program (in the screen capture, SmartFTP); that's removed, not disabled. Disabling a program which has compatibility issues may be a reasonable action; removal, not so much.

IANAL, but that seems at the least to be a bloody annoying action, and at the most, anti-competitive as well as anti-consumer.

Geez, Microsoft keeps shooting themselves in the foot. I feel done with Apple and I'm so ready to switch after the MacBook Pro 2016 dongle debacle & the glorious Surface Studio... but then I read things like this and see Windows uninstalling software without the user's permission (SmartFTP is the Windows FTP client I would use!) and realize that I just can't switch to Windows even if I want to.

Yesterday I fired up Windows 10 in a VM on my MacBook to get some development work done, only to find Windows go straight into installing updates while I'm on battery in a cafe & without my power cord. (But it insists "Don't Turn Off Your PC".) 90 minutes later (!) Windows finally launched... just as I had to run for my train home. I literally couldn't do my work that afternoon, all because of Windows.

I have been running MBAM (free version, scan only) + MSE for years without ANY issues; not a single virus on any of my Windows machines.

I can't understand the need for bloatware aka "anti-virus". If you take the time to educate users and train them to stop clicking on and installing whatever pops up on their screen, then they can pretty much rely on MSE and have peace of mind.

Obviously MSE might not detect EVERYTHING but basic education on how to treat spam/advertisement/phishing goes a long way.

Just looking at the headline(without the domain), and recent events, I thought this was going to be about Teams vs Slack.

The reality is that an organization as big and as talented as Microsoft could, if they put their mind to it, develop and release a software product in virtually any market covered by their ISVs, and unless it is really terrible, or the third-party tool is really good, displace it.

After decades of providing us insecure software, are we supposed to blame Microsoft for doing the right thing & getting things _almost_ right?

I have not yet found antivirus software which truly educates the user - there are wonderful opportunities in there for the right kind of company/product. Proactive solutions beat reactive solutions hands down. Like they say, "A stitch in time saves nine."

In case Microsoft is listening - please expose a knob allowing users to make the file-modification hooks no-ops. There are times when I am doing critical things where I don't want the file-modified callbacks of AV / other spyware to be invoked.

Pretty bad practices by Microsoft and sounds like that has a decent chance of costing them money in the EU.

However I think the point that having one monopoly AV decreases security because the bad guys can adapt to it is at least not as clear cut as it seems. Especially compared to the scenario of someone having multiple AV programs installed. AV programs themselves are excellent attack vectors, especially for the more skilled attackers so reducing the number has at least some theoretical benefit.

From my experience AV software makes life of small developers targeting Windows platform harder, so an argument can be made that Microsoft is actually helping independent developers by improving installation and update experience.

We often need to deal with user problems because the installation or update process was blocked by AV software without any user visible message. Also often an application is incredibly slow for some period after the installation because AV is doing some additional scanning/blocking (again the user is not informed about this and blames the application).

Good read. Makes me glad I haven't made the switch to Win10 yet. And yes, Windows Defender is horrible; the other day it just decided that all .lnk shortcuts to browsers were in fact malware (even when it was just an ordinary shortcut)... Anyone else experienced this with Windows Defender?

> Microsoft has even limited the possibility of independent developers to warn users about their licenses expiring in the first three days after expiration ... this is the crucial period during which a significant number of users seek extensions of their security software licenses.

So it's about profit, because the AV companies lose out in their historically most lucrative period to keep paying users.

This is why I don't trust Microsoft Azure. I recently had to watch a marketing guy trying to sell me on Azure; every second slide had a big, bold OPEN sign in the upper right. A truly open company does not need to stress on every second slide that it is truly open. I tried using the Service Bus (it was not my choice) and stumbled upon https://github.com/Azure/azure-sdk-for-java/issues/465 - it has been open since February! Node.js and the REST API were not working either, and I could not use the C# library from my Mac since important DLLs were missing.

It was a scary experience and it will take some time until Azure gains my trust. What would help is untangling Microsoft and Azure into a structure like Google has done with Alphabet. With the current structure, conflicts of interest are inevitable.

While he is on point about Microsoft's general anti-competitive behaviour, antivirus publishers behave the same. You want an ANTI-VIRUS, but then you get continually bombarded with reminders that you are missing a firewall, VPN, password manager, and everything else they can think of.

Microsoft is creating a walled garden and has every right to do that... for some sectors. The most problematic for me is the public sector (gov), because they are forced to use Microsoft products by their own agenda; others are free to choose, and I believe they do...

Security fixes and improvements should be made at the OS level. And they are: Microsoft, Apple and Linux receive fixes very quickly. No software vendor will be able to do better than the OS at fixing and stopping threats.

I stopped using AV software a long time ago for the following reasons:

- It slows down your device (memory, cpu, disk access, etc.).

- It annoys you a lot more than it stops or solves any security concern. I've yet to hear from someone telling me their AV software saved them from an actual real virus... If this ever happens it's probably a damn advanced attack that even the AV software doesn't know about.

- It's extremely hard to remove, especially when pre-installed as a bloatware on a PC. Sometimes it's also installed as an extension of other software (browser, etc.).

- It often makes wrong decisions (false positives) that lead to broken web pages, legitimate software that stops working, etc. And unfortunately the "standard" user has no way to figure out it's due to the AV. I can't count the number of times I had to work with my customers on figuring out what was making my website or software fail to run (or even to install) on their machine. One time I had to write to an AV vendor in order for my browser extension to be whitelisted. Never got any answer...

AV software can easily be replaced with common sense and a set of very simple rules:

- Have a hardware/software firewall that blocks everything except what's required (allowing only web traffic initiated from the machine is enough in 99% of cases). Every major OS now comes pre-configured with a software firewall, which removes 90% of the threats.

- Use a strong email service or software (Gmail, etc.). This way you reduce the likelihood that a virus, spam, or phishing email passes through.

- Don't open email attachments coming from unknown or untrusted senders. Even when the sender seems legitimate, double-check that the email makes sense (not unusual behavior), and pay close attention to URLs, written language and wording. Don't click links without knowing where they go (domain name, https, etc.). Email remains the simplest way to install a virus or a trojan on someone's computer, so be very, very attentive when acting upon an email. If you use an email provider (like Gmail), report spam or phishing attacks very quickly so that 1/ they can be stopped quickly for others and 2/ it teaches the machine learning to do better next time.
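The firewall rule in the first item of the list above (deny everything inbound, allow traffic the machine initiates) can be sketched with ufw on Linux. This is just an illustrative sketch, assuming ufw is installed and you administer the box over SSH:

```shell
# Deny all unsolicited inbound traffic, allow anything the machine initiates
sudo ufw default deny incoming
sudo ufw default allow outgoing

# If you manage the machine remotely, explicitly allow SSH before enabling
sudo ufw allow ssh

# Turn the firewall on with the rules above
sudo ufw enable
```

Hardware firewalls and other OSes expose the same default-deny-inbound idea through different interfaces; the commands need root and an installed ufw, so treat them as a sketch rather than a copy-paste recipe.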

For 15 years I've been applying these rules and I have never gotten a virus, without using any AV software. My devices run like a charm (PC or Mac).

While I'm a big defender of freedom and open source, I can easily understand and forgive proprietary OS providers choices with regard to the AV editors.

Quite frankly, I think the continuous consumer abuse by the windows antivirus vendors is something that Microsoft is listening to here, and making it much harder for questionably efficacious software to put you into a perpetual license loop primarily fueled by scare tactics.

Sweeney had an argument, and one that I think Microsoft is trying to address. Anti-virus software (including McAfee and Kaspersky) is responsible for so many daily fuckups in my corporate computing experience that I am aggressively removing it from every computer I can find, and I tell everyone I can to do the same.

It is good that Microsoft is making them justify their existence, use less deceptive re-subscription tactics, and in general providing very stiff competition for them. In this specific case, it's not monopoly tactics; this is pro-consumer competition.

I hope people realize this, because I think most windows users read this and then immediately squinted and said, "Kaspersky, huh?" It took me over an hour to scrape that gunk out of the last windows 7 box I set up for my family, and I was happy Win10 kicked it to the curb for me on the upgrade I just helped with.

Why does Microsoft even allow trial installations of all of these sorts of things? It's cut-and-dried user-hostile behaviour, as are bundled installers as a class. Microsoft has the power to kill these pieces of software. I wish they would.

I hate how intrusive Windows Defender is; it automatically deleted some executables of mine I know to be clean (false positives). Just disable that beast entirely, seriously (group policy or regedit). Makes your PC much snappier. Defender is my biggest gripe with Windows 10.
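For anyone hunting for the regedit switch the parent mentions, the policy historically looked like this on Windows 10 - a sketch only, since Microsoft has since deprecated the DisableAntiSpyware policy and Tamper Protection can override it on newer builds:

```shell
REM Run from an elevated command prompt. This sets the Defender
REM group-policy value that older Windows 10 builds honored.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows Defender" /v DisableAntiSpyware /t REG_DWORD /d 1 /f
```

The equivalent group-policy path is Computer Configuration > Administrative Templates > Windows Components > Windows Defender Antivirus. Whether you *should* run without any AV is, of course, the whole debate in this thread.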

Having anti-viruses installed is for fools. I just upload every single executable to VirusTotal.com and make sure I know the source I downloaded it from - this is far superior to any anti-virus and doesn't slow down your PC.

I said this when Windows 10 was new and I got tons of downvotes. I say it again because it still holds true and needs to be said.

I don't hear anybody complaining about Apple's anti-competitive ban on AV for iOS.

Oh, wait, that's because iOS is orders of magnitude more secure than Windows and doesn't really need an AV product. Whereas Windows has been plagued by malware for decades. Nobody wants to buy AV in the same way that nobody wants to buy health insurance; it's an unfortunate necessity in an imperfect world.

Unfortunately the tradeoff we're facing here is the "information feudalism" one. People aren't realistically able to secure themselves, so they end up having to pick a quasi-monopolist and delegate to them the ability to ban software. Such bans can be extremely arbitrary. Occasionally even your headphone jack gets taken away. But people put up with it because it works for them in a way that anarchy doesn't.

Microsoft would clearly love to make Windows behave like iOS: apps only installable from the store which has power of veto and takes a cut. Heck, Apple would probably like to do that with OSX. Neither has quite managed it yet.

I suspect the long term way out of this is a proper user-owned subscription-driven open hardware company, but that's a very hard thing to build and a hard sell to the average user.

Globalization (and corporatization) creates money vacuums, and dead spots in the places that they replace.

Walmart/Amazon/Etc vacuum money and resources from the local (or "local-ish") community to their respective profit centers. It used to be small towns would be relatively self sustaining and much of the money staying within local boundaries, and while the cost of goods was higher at least buying them would support local businesses.

When a factory leaves a town or a company goes under from overseas competition it creates a big gaping hole in the local economy that frequently doesn't end up being filled. There are no companies coming back, and no capital available in the area anymore since all profits are sucked up.

It was a beautiful vision for corporations: get a cheap workforce overseas, invest billions into China, and then wait for those billions to come back to the USA. It was true in the beginning, because China imported a lot of tech and goods; they didn't have their own tech. Everything would have worked out great, but the Chinese were not so stupid. They thought: we can create our own products of the same quality, with our own tech, with the money we got from the West. So they built universities and recruited the best minds from the West to teach in those schools. Then they started to make their own products, which began to compete with those that need mid-to-high-tech knowledge to produce. At this point globalization stopped being the golden egg of Western economies. That's why you see declining growth in the highly developed countries of Europe and the USA.

The reason for the "productivity" decline may simply be that manufacturing and agriculture don't have many employees any more. Only about 10% of the US workforce is in agriculture, manufacturing, and mining. Those are the areas where, historically, there's been huge productivity growth. That's how that number got down to 10%. Yet US manufacturing output is at an all time high.

"Productivity" at the macroeconomic reporting level is total output / total workers. So productivity increases in manufacturing are divided by the total workforce size, which dilutes them.

Job growth is in the areas where productivity is low, such as health care and education.

Does anyone actually think globalism is going to be stopped? My guess is it will be harnessed. Corporations fled the US to increase profits - both to avoid taxes and to escape environmental regulations. Given no regulation to prevent/dissuade the exodus, they outcompeted the ones who didn't.

It would now take a very large effort that would look like protectionism to bring all that back. Can the Trump administration pull that off?

It makes me very glad that we are starting to question globalization. Whether or not it is a bad thing --- and it may very well be a good thing --- is still an open question in my mind. But for decades now it seems that globalization has been very unjustly considered good by default.

At Convox we have been running Docker in prod for 18 months successfully.

The secrets?

1. Don't DIY. Building a custom deployment system with any tech (Docker, Kubernetes, Ansible, Packer, etc) is a challenge. All the small problems add up to one big burden on you. 6 months later you look back at a lot of wasted time...

2. Don't use all of Docker. Images, containers and the logging drivers are all simple and great. Volumes, networks and orchestration are complex.

3. Use services. Using VPC is far simpler than Docker networking. Using ECS is much easier than maintaining your own etcd or Swarm cluster. Using CloudWatch Logs is cheaper and more reliable than deploying a logging contraption into your cluster. Using a DB service like RDS is far, far easier than building your own reliable data layer.

Again thanks for sharing your experience as a cautionary tale.

If you are starting a new business you should not take on building a deployment system as part of the challenge.

Use a well-built and peer reviewed platform like Heroku, Elastic Beanstalk or Convox.

A lot of this was the motivation for writing the book we wrote on Docker in Practice [1]. The reality of implementing such a technology requires a combination of improvisation, technical nous, bluffing, and a willingness to work through the inevitable problems [2].

I've talked about the relative immaturity of Docker as a used system (outside of dev) [3] and am struck often by how rarely people understand that it's still a work in progress, albeit one that can massively transform your business. The hype works.

That said, Docker can work fantastically in production, but you need to understand its limits and start small.

Reading these articles and comments is fun. We are a very large organization starting to use Docker, and we love making things even more complex than other folks, so I can tell this is going to be a big mess. Fundamentally we (as in this industry) seem to love simple ideas that become massively complex.

Not being a fan of docker, but "Since image sizes can be as high as a few GB, it's easy to run out of disk space. This is another problem with Docker which you have to figure out yourself. Despite the fact that everyone who's ever used Docker seriously comes across this issue sooner or later, no one tells you about it at the outset" - monitoring free disk space is something you definitely want to do regardless of whether you're using Docker or not.
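For what it's worth, newer Docker releases (1.13+) ship built-in commands for exactly this. A minimal sketch of keeping image disk usage in check:

```shell
# Show how much space images, containers, volumes and build cache use
docker system df

# Remove stopped containers, dangling images and unused networks
docker system prune -f

# More aggressive: also remove ALL images not used by a running container
docker system prune -af
```

On older Docker versions people did the same thing by hand with `docker rmi $(docker images -q -f dangling=true)`, which is part of why nobody warned you: everyone had their own cron-job workaround.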

I'm a big Docker fan, but this is interesting to me mostly because it shows how it's hard to get started, particularly the opening few sentences about the paucity of documentation for helping you get going from scratch. There have been good guides, but they go out of date quickly! There's so much change (because it's still very much under development) that it's hard for someone coming to it fresh to figure out what's up-to-date and what isn't. For those of us working in container tech and tools this is a good lesson about making sure the entry curve isn't too steep.

As someone starting considering Docker (and possibly Swarm), these seem to be pretty serious criticisms. Any experiences to corroborate / counter these two posts? Going by what's written here it would be suicide to use Docker, but many people are...

> Orchestration across multiple physical servers gets even more nasty, where you'd have to use something like Swarm. We've since realised that Swarm is one of the lesser preferred options for orchestrating clusters.

Could you elaborate on this? Did you settle on an orchestrator, and if so, which one?

If anyone is looking into orchestration systems, I can't speak highly enough of Kontena (https://github.com/kontena/kontena). While it is still in the early stages of development, it is a great "small-mid level" platform with a ton of features. The config and concepts are also easy to wrap your head around. We chose Kontena as Swarm is still unstable (imo not "production-ready" but I've seen counter-anecdotes) and Kubernetes was lacking some features we required. The devs have been super helpful with any problems we've had in deployment as well.

This article reflects our transition experience with Docker. It took us two years to finally feel comfortable (old chaps from decades of corp dev), but the end result is very positive and rewarding. It is fair to say this should be expected for any infrastructure migration; there is nothing wrong with being slow and careful as long as we are moving forward. In our team's experience, the problem has never been finding help/answers. The problem we were facing was, on one hand, an encyclopedia of single-page documentation like the Dockerfile and docker run references, while practical guidance and gotchas are scattered around the rest of the Internet, blended with personal and business-specific opinions. It was hard, but eventually we figured out the best way to work with containers from git to build server to deployment that fits well with our productivity workflow, as well as where not to use Docker.

There's a lot of "figured it out ourselves" without actually sharing what they figured out. It's like this post is a whole lot of nothing: we know Docker in production is not easy, but how about telling us more details?

I agree with most of the points here, but some stuff is misleading. For example, the "you have to rebuild after every code change" claim is ridiculous. Just use "COPY SRC/ /app" in your Dockerfile, and in dev mount SRC/ as a volume over /app. There, hot reloading sorted for development.
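Concretely, the pattern above looks something like this in a compose file (service and path names here are hypothetical, and this assumes the Dockerfile already has the COPY line):

```yaml
# docker-compose.override.yml - applied automatically in dev
version: "2"
services:
  web:
    build: .
    volumes:
      # Shadows the image's baked-in /app (from COPY) with the live
      # source tree, so code edits show up without rebuilding
      - ./src:/app
```

In production you simply run without the override file, so the container uses the immutable copy baked in at build time.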

Don't get me wrong, Docker is one of the most frustrating technologies I've used (partly because it shows such promise), but a lot of the problems he describes can be sorted with the most cursory Google search.

Why is running a db in a container with the data directory mounted from the host such a bad idea? Dropping the container onto a new host with a copy of the data directory looks easier than having to install the db from scratch, even with automation tools. Are there performance penalties at run time?
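For reference, the pattern being asked about is just a bind mount over the database's data directory; a sketch with a hypothetical host path, using the official Postgres image:

```shell
# Keep the data on the host so the container stays disposable.
# /srv/pgdata is a made-up host path; postgres stores its data
# in /var/lib/postgresql/data inside the container.
docker run -d --name db \
  -v /srv/pgdata:/var/lib/postgresql/data \
  postgres:9.6

# The container can now be removed and recreated against the same
# directory, or the directory copied to a new host as described above.
```

The usual objections are about what happens on unclean container shutdown and about bind-mount I/O overhead on some storage drivers, not about the mount mechanism itself.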

Kind of very basic issues IMO. The only one that's hard is logging. We are using K8s on AWS and I haven't found a good solution for logging centralization. Personally I don't like the Kibana interface for that kind of stuff.

A lot of the issues I see him describe in production are fixed by Kubernetes. Compose works fine for local dev orchestration, but the pattern doesn't work for deployment. The ideal world would be that I can run my compose file on my cloud provider, but Swarm isn't there yet. I have to rewrite my compose file using Kubernetes configs; it's not a 1:1 mapping, but the high-level connections are there if you think of Kubernetes Pods as Docker containers. He mentions orchestration across a cluster with Swarm is nasty, but it's elegant w/ Kubernetes.

Docker Registry:

Obviously, there is no constraint preventing him from using a 3rd-party service. Why not let Google Container Engine (GKE) or AWS ECR handle it for you?

Longer build times:

I think this is really where he is missing the mark. It sounds like he has a fundamental misunderstanding: that if you mount the source code in dev, you have to do it in prod too, and that you have to have one container. Not true: you can mount the source code in dev using Compose, so you don't have to rebuild every time you change a line. Also, I think it's a Docker pattern to keep your containers as atomic units of your app architecture. It sounds like they are trying to bake every component of their app into one container (app + db + service, etc.). Just break them up into separate containers and link them up with Compose. This architecture then translates cleanly to one of the cloud providers for production: GKE or AWS.
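The "one container per component" idea sketched as a compose file (service names, image tags, and the connection string are illustrative assumptions):

```yaml
# docker-compose.yml: each component of the app gets its own container
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      # the app reaches the database by its service name, "db"
      DATABASE_URL: postgres://postgres@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data  # named volume keeps state out of the container

volumes:
  dbdata:
```

Because each service is its own container, the same decomposition maps onto managed orchestrators later: the `app` service becomes a deployment, the `db` service becomes a managed database or a stateful workload.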

DB and Persistence:

Yes, I think it is very clear that containers are stateless, so if you want to run a DB in a container you'd have to mount an external drive somewhere. The merits and risks of that are another discussion, but as he states, it's generally frowned upon to containerize a DB (not completely sure why; some argument about container stability and data corruption). I talk more about this here: https://news.ycombinator.com/item?id=12913198

Logging:

I think 12-factor-style containerized apps fit more smoothly into the Docker Compose style of architecture. Accordingly, if all your containers log to stdout, it's all conveniently merged and printed to the terminal when you run Compose. Then in production, GKE handles it nicely too with its logging system.
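A sketch of what that looks like in a compose file (the image name is hypothetical; the app itself just writes to stdout/stderr and never touches log files):

```yaml
# The service writes to stdout; the logging driver decides where it goes
services:
  app:
    image: myapp:latest        # hypothetical image
    logging:
      driver: json-file        # Docker's default; swappable for gcplogs, awslogs, etc.
      options:
        max-size: "10m"        # rotate so host disks don't fill up
        max-file: "3"
```

In dev, `docker compose logs -f` tails the merged stream from every service; in production the same stdout stream is what GKE's logging agent picks up, so the app code doesn't change between environments.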

In conclusion, I think most of his problems would have been avoided if he hadn't skipped researching Kubernetes and hadn't made the mounting oversight. The other big oversight, or at least one he didn't mention, is that he has no deployment tool. I wouldn't be able to deploy effectively without a build tool like Jenkins. I talk about a lot of these issues and how to fix them here: https://news.ycombinator.com/item?id=12860519

Yet another guy who doesn't know how to use Docker properly and just brags about it. There is a lot of misinformation in this blog post.

The conclusion from this is not that Docker sucks, but that YOU HAVE TO LEARN it. I agree that it's a very steep learning curve, but once the pieces come together, Docker solves quite a lot of problems and is actually very useful.