Really nice work. This doesn't seem to be an official BMW site, though!?

Porsche[1] and Renault[2] did something similar lately. It's great to see WebGL used for this in production. Honestly, I'm surprised it took so long. Visualizing cars with WebGL seems like a no brainer, especially when most current websites load dozens of images for their "360 views".

I am perplexed. Modern games are fantastically more visually appealing and content-rich than this, and modern films use computer graphics that are far more impressive still. What is it about a single model viewer that makes it headline material? It looks nice, though.

The scene looks very nice, though I'm not sure what's so spectacular about displaying a single model. Someone in the thread said it's impressive because you can view it on mobile, but why wouldn't you be able to? Modern phones are equipped with better hardware than what we had over a decade ago, and we had 3D games back then, so displaying a single higher-poly model on far better hardware doesn't really amaze me. Besides, I don't really dig the whole idea of WebGL, but the damage is done already.

How do you design a system that's hardened against social engineering but not hardened against innocent mistakes, like losing your password? It seems like the easiest way to access public systems like this is through social engineering techniques around password recovery or phishing.

Of course there are well-known answers used to mitigate these problems somewhat: 2FA, login images, etc. But I still feel that social engineering attacks hit a really vulnerable weak spot in many systems.

(On a mostly unrelated note, can we get rid of security questions forever? I've taken to just giving nonsense answers for them and storing my answers somewhere secure. I sure don't want my passwords being reset because somebody knows my mom's maiden name...)

Has there been any confirmation that this account even actually belonged to the CIA director? If yes, has there been any evidence that there was actually anything sensitive on the account? (I seriously doubt the latter)

If there was nothing on the account, how is this different from any of the other tens of thousands of AOL accounts that have been hijacked since the '90s?

I think leveraging virtual spaces to provide greater access to mental health support is a great step forward to improve outcomes for troubled adolescents. This definitely opens interesting avenues for future research.

Interesting to see how instant-gratification-type habits like endlessly browsing your news feed are the new low-grade drug habits. I'm just waiting for the study that finally proves that surfing twitter for more than 60 minutes causes a 300% higher likelihood that you'll visit a porn site in the next five.

I try not to comment about this too much, but the text on the Docker site is stupidly hard to read. The font color, #7A8491, has a contrast ratio of 3.8:1 (black on white has a ratio of 21:1), which is barely above the W3C accessibility standard for _large_ text (18 point, or 14 point bold) - and the bar should be higher for the thin-stroke text the Docker page uses (Helvetica Neue Thin).
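The WCAG 2.x contrast math is easy to check yourself. A minimal sketch (the helper names are mine, not from any library):

```javascript
// WCAG 2.x relative luminance: linearize each sRGB channel, then weight.
function channel(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map(i =>
    channel(parseInt(hex.slice(i, i + 2), 16))
  );
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), range 1..21.
function contrast(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrast('#7A8491', '#FFFFFF').toFixed(2)); // "3.79"
console.log(contrast('#000000', '#FFFFFF').toFixed(2)); // "21.00"
```

Running this confirms the 3.8:1 figure for Docker's gray on white - below the 4.5:1 that WCAG AA requires for normal-size text.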

Fix this, please, Docker. A few more points towards black isn't going to destroy the look and feel of your page.

I did a quick smoke test to see if Tutum passes muster. My smoke test for this kind of tool is whether they have a solution for deploying MongoDB as a production-level service with sharding, or at the very least replica sets. Like so many other let's-do-the-easy-part-and-stop-there companies, they have a template for starting a single development MongoDB node, which would be easy enough to do myself. I want a tool that has a repository of templates for making formations of very hard things easy. OpenShift is at least working on it: https://github.com/openshift/mongodb Their replica set version is not able to persist data permanently until Kubernetes figures out how to attach separate persistent volumes to pods in the same service. Unfortunately, Amazon is again the only game in town that does exactly what I want: https://aws.amazon.com/blogs/aws/mongodb-on-the-aws-cloud-ne....

This seems to be a good move towards a more stable revenue generator for Docker, the company, but I'm more interested in what the long-term Docker ecosystem implications are.

I think Docker is just passing through the early adopter status in terms of actual production usage (it's much more mature in its lifecycle for dev), and having one of many cloud Docker providers owned by Docker might have a chilling effect on other 'container in the cloud' providers using Docker as their primary container format/platform.

I tried Tutum a couple months ago. The onboarding experience was awesome, the free image builder was super fast, I had metrics for my processes everywhere - I loved it. The struggle started when I deployed a real app with a small workload: after a couple of hours, metrics didn't work at all, processes stalled, and the whole UI became useless because I had zero visibility into what was going on.

I switched to Heroku only to realize that I had the same problem there too. Obviously it was an issue in my app, but at least Heroku gave me a specific R14 error code and a description of what was happening, so I finally knew what I was dealing with. For the next 48 hours that I spent debugging the memory leak, I had my dynos switched to 1X to get even more resource metrics; once the issue was solved, I switched my dynos back to hobby.

I'm considering going back to Tutum now that I have deferpanic installed and configured in my app, and my Heroku bills are around 100 USD monthly (20 USD SSL endpoint x 3 + 7 USD hobby dynos x 3 + 22.50 USD Compose RethinkDB). But I was struck by how much value a mature PaaS can deliver for clients, even for a hobby-ish app like mine.

I'd like to know what Tatum offers in comparison to fabric8.io. It seems the "video intro" and "take it for a spin" links point to the same page, which is not a video introduction. That's disappointing. Maybe someone in charge of the page could fix it?

Searching for "Tatum video introduction" on a search engine only returns results about a certain movie star, which is not terribly helpful.

I've used Tutum for a little while now and I love it. I'm just wondering what Docker's plans are for when Tutum leaves beta. It would be nice if it stayed free. The potential pricing seems decent enough at $7/node/month, but it could be better.

edit: updated the tentative pricing image URL, as it looks like someone has deleted it from the Tutum Slack team site.

I somehow felt like the Docker team was closer to the Rancher team, so I thought Docker might acquire Rancher at some point. I think this is a move to produce revenue in the future, while Rancher remains yet another open-source project looking to monetize.

The proposals do not address the elephant in the room and the very reason Safe Harbor collapsed: The NSA and the ability of the US government to override any treaty to access any data using secret warrants.

It is that which killed Safe Harbor, and none of the proposals at the end of the article would be immune to that weakness again.

It would remain the case that the proposals made would not be in line with the clear ruling the European court gave, so long as the US government can continue to override international treaties and its own courts.

He makes the issue more complex than necessary for the benefit of his employer. There is no reason why private information needs to move across borders without the express consent of the individual involved. At that point the individual agrees to be bound by the rules of the country where the data is going or no transaction is done.

Let each country have its own set of rules and have all countries respect those rules for data located in the hosting country.

The idea that each country must be exactly the same and data is by default available for transmission across borders is only to the benefit of multinational companies.

Microsoft are at the centre of another case which will really decide how badly EU-US trade is affected:

> Microsoft stands in contempt of court right now for refusing to hand over to US authorities, emails held in its Irish data centre. This case will surely go to the Supreme Court and will be an extremely important determination for the cloud business, and any company or individual using data centre storage. If Microsoft loses, US multinationals will be left scrambling to somehow, legally firewall off their EU-based data centres from US government reach.

At the moment, data can be held within the EU by US companies and it's all ok. If Microsoft is forced to hand over emails stored within the EU to the US government, then all bets are off.

In that future, it may not even be enough to have an EU-based subsidiary of a US company hold data within the EU, since it'll have been shown that the U.S. government can coerce them.

And we like to talk about large companies like Microsoft, Apple, Facebook, Google etc. But they can throw money, lawyers and engineers at this problem. But the thousands of US-based SaaS apps do not have that luxury. Likewise, there are thousands of EU-based small SaaS app that will have everything from their hosting stack, to their bug tracker, to their communications tools taken off them.

The ECJ was simply responding to a fairly obvious and fundamental problem: your private data IS NOT SAFE in the US. The US government doesn't care and has no intention of changing this, so expect the ECJ's ruling to stand for a long time.

To me the key idea from Brad Smith's post, which I don't necessarily agree with, was this:

> Third, there should be an exception to this approach for citizens who move physically across the Atlantic. For example, the U.S. government should be permitted to turn solely to its own courts under U.S. law to obtain data about EU citizens that move to the United States...

What he's really arguing is that the EU should not have invalidated Safe Harbor because doing so breaks the Internet, and that Microsoft will provide its customers' data to the U.S. and EU governments only "in the most limited circumstances". In that sense, it's nothing out of the ordinary compared to Microsoft's typical position on this issue. They can certainly do better than that, e.g. throwing away the server-side encryption key like Apple does for iOS devices, so that they don't have the technical capability to hand over user data even when compelled to.

> Government officials in Washington and Brussels will need to act quickly, and we should all hope that Congress will enact promptly the Judicial Redress Act, so European citizens have appropriate access to American courts.

Well, Microsoft is wrong here to believe that the Judicial Redress Act [1] will be sufficient. The CJEU has required "essentially equivalent" privacy protections for EU citizens as they get in the EU.

The US Privacy Act does not give them that, so the Judicial Redress Act falls short.

The US needs to pass a much stronger privacy law that is "at least" as good as the one in the EU, if it wants its companies to continue to get EU citizen data (and I assume it does). It can start by finally reforming the ECPA for the 21st century.

I'm sure that in legal circles, there are a whole host of similar "law hacking" examples. This seems like a particularly ingenious approach. I'd be interested to learn about other circumstances where laws are creatively misused to achieve noble ends. (The examples where the law is misused for nefarious ends are too numerous to mention.)

Just for the record, the Netherlands has been proudly advertising their tax deals for years.

See this presentation[1], slide 12. Right from the horse's mouth:

Reason 7 [to have a holding company in Holland]: Fiscal climate: Very competitive tax climate from its far-reaching tax treaty network to the possibility to conclude socalled[sic] advance tax rulings.

Utterly blatant. And notice the logo of Starbucks next to it. The Dutch government advertises that Starbucks pays practically nothing in tax, in order to undercut other EU countries.

These tax deals usually take the form of a fixed tax guarantee: the company agrees to place their holding company in the Netherlands and pay X euros in tax for the next N years (2 to 5), regardless of their actual revenue or profit. For the Dutch government this is just free tax revenue and if they don't make a sweetheart deal with the multinational the holding company would end up in Luxembourg or Ireland instead. This way the multinational can make the countries fight for the most preposterously low offer.

Good. I don't understand why companies need to be bribed to come to markets. Want to sell your goods, great, pay the same taxes every other company pays. This is nothing more than government subsidies for specific private companies.

In 100 years we're going to look back and realize what a waste of time and effort corporate income taxes are. And what amount of time was wasted trying to levy and avoid them.

In the end, you can recover the same money by taxing dividends and income accordingly, and it's far more difficult to hide those (a person's residency is less ambiguous than a corporation; and while you can try to play games, with proper enforcement you will end up in jail for doing it).

Of course any politician will get castigated for suggesting removing the tax entirely, but that's just politics and not sound economic policy.

There are a lot of stories lately that throw the idea of "union" in "European Union" into question. The tax discrepancies are just part of the picture; the varied immigration policies are also getting a lot of attention. Hopefully the EU can find a way to see itself as a truly unified whole, which would be of great benefit to the world.

Up to 30 million in repayments since 2012? Is that it? These companies have billions in revenue, and yet that's all they have to pay? How about fining them, and the countries, for doing so? This just seems wholly unfair.

This may be a small step in getting global players to play fairer, but from what I can tell this is still cheating the system and depriving countries and their citizens of badly needed tax income. All while competing unfairly with smaller non-global companies.

I don't get why the EU still hasn't managed to get this under control.

I'm curious: how bad is it to leave a new job soon after starting (because you realize the product isn't as great as what you thought it would be/too much competition in space)? When you are a high performer, you really don't want to waste everyone's time if you end up in this bad situation. What is the professional thing to do?

Asking because the article mentioned some people staying only for a few weeks. In a small startup, not seeing traction would be reasonable cause for departure since your paycheck directly depends on it. In a large company, things are not always interrelated in the short term. So there is more time to try out things. I guess the question is: how long should one try to make a new job work?

> Just last week, Yahoo lost two senior women execs: development head Jackie Reses to Square and marketing partnerships head Lisa Licht.

Before that, another exec once close to Mayer, CMO Kathy Savitt, left for a job at a Hollywood entertainment company, although sources said that was due in part to increased estrangement between her and Mayer.

She has been CEO of a publicly traded company for 3 years - one that was "seemingly" in trouble. 3 years is a long time for a company that was founded 3 years ago; Yahoo is 21 years old and still alive (and somewhat doing well).

I do not envy being tasked with turning around Yahoo. If I were her I'd more radically reinvent the company and buy more startups, maybe even pivot the whole thing. But it's a public company top-heavy with MBAs so you're really quite limited... her job is probably like driving a car with a dead elephant strapped to the trunk and a flat tire.

I like the idea and the use of AGPL. This license lets companies release useful software and still keep the door open for selling alternative licenses, if that is what Metabase does. I have been thinking of writing something similar for natural language access (similar to what I used in my first Java AI book fifteen years ago) to relational and RDF/OWL data stores, and Metabase provides a nice example of how to distribute a general purpose standalone query tool.

Also +1 for being a Clojure app. I am going on vacation tomorrow and am loading the code on my tiny travel laptop for reading.

I always thought that companies were bought at a price based on their market value (stock price x number of shares), or if someone did not want to pay all in cash, they would try to compensate in other ways until the market value was reached.

But here WD bought SanDisk at ~$85-86 per share, when it was trading at ~$75 per share. That's at least 13% more.

Does it simply mean that WD expects SanDisk to rise in value rapidly? Or have I got company acquisition valuation wrong?
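The 13% figure is easy to verify; a quick sketch (the prices are the approximate per-share figures mentioned above):

```javascript
// Acquisition premium: offer price vs. pre-announcement trading price.
const offer = 85;   // low end of the reported ~$85-86 per-share offer
const market = 75;  // roughly where SanDisk was trading

const premiumPct = ((offer - market) / market) * 100;
console.log(premiumPct.toFixed(1) + '%'); // "13.3%"
```

Premiums in this range are common in acquisitions: the buyer has to offer existing shareholders more than they could get by simply selling on the open market.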

This is an unfortunate development. It's tough to fight against the tide of developers that simply don't care or honestly believe it is a good thing, but it's a fight that is worth it for better security and easier maintenance of GNU/Linux systems, which directly benefits users. Thankfully, Debian hasn't given up, nor has GuixSD that I help maintain.

I asked Tom Callaway from Red Hat about it and he said "I'm not a fan, I think its a poor decision, but I also appreciate that I might be in the minority these days." [0]

Hopefully, once enough people have been burned by the apparent convenience of bundling, we'll see the tide change. Maybe after Dockerization has run its course.

I have worked on several large applications where libraries were customized and bundled in. We would have been better off in the long term implementing the small delta we needed from the library in our application. In every case I saw, it was just an example of lazy engineering that led us to bundling.

Policies that are out of line with reality are bad policy: the war on drugs does not fix drug abuse, vagrancy laws do not fix poverty, and the war on bundling merely ensures that bundled software goes unreported.

The metaphor doesn't pan out - the third one is canonizing a technical error.

Why Docker? Why not RPMs? They are not that hard to build, and they have years of design and work behind them. I hate bloat. 20 years ago I could get the same work done as today in a thousandth of the memory and disk space.

I've used (servers and laptops and desktops) Fedora and Ubuntu for many years. Since the advent of package management, Fedora has been significantly more "just works" when it comes to anything slightly professional or complicated (of course Ubuntu has had the edge on personal multimedia), and I'll bet this practice of discouraging bundling is a huge part of that. On the other hand, Fedora usage is falling, proportionally, right? I'm not sure, but it seems like it. Anyway: tough call.

The only "online piracy" I see here is when Elsevier demands US$30 from you to get a copy of a paper written by scientists paid for by your tax dollars, who paid Elsevier page fees to publish it. Elsevier and similar companies are the thieves here, and they have a hell of a lot of nerve to be accusing scientists of "stealing" and "piracy" for working to create the very knowledge Elsevier shamelessly exploits. (Even Elsevier's very name is a theft: they are attempting to free-ride on the goodwill of the Elzevir family of Renaissance publishers, who have no connection with them.)

Do they have the law on their side? Yeah. So did the Pope when he sentenced Galileo to life in prison for promoting heliocentrism. That doesn't mean they're in the right; that means the law is in the wrong.

I've had over 100% luck emailing papers' authors directly asking for a copy of a particular paper I've been interested in reading. I typically get a PDF emailed back to me.

I say "over 100%" because several times I've had hard copies sent for whatever reason with hand-written letters thanking me for expressing interest in their research and letting me know they'd be happy to answer any questions, etc.

I've generally found that some researchers, especially in relatively arcane areas are very pleased to find people who are genuinely interested in their work.

I only appeal to authors directly if I'm unable to access a paper online through my library's JSTOR access which is fairly extensive.

Apologies in advance, but when I saw this link, I expected to find an article about a nondescript phrase ("Blue Iguana" or some such) that would tip people off to meet in an unlisted IRC room or the like.

I realize not everyone is on top of internet culture and slang, but reading that "#icanhazpdf" is a "secret codeword" makes me wonder whether the whole piece is tongue-in-cheek ("I am shocked, absolutely shocked to find gambling in here!") or whether the author really has discovered the internet for the first time.

Living in a developing country, you learn to ignore copyright or you never learn anything. I don't know if it was invented as a way for developed countries to keep a competitive advantage, but it sure would work that way if people actually obeyed it.

#icanhazpdf, Sci-Hub, libgen, etc. are all symptoms of the disease. Science is in something like turmoil as it adjusts to the internet. Of course, the rest of the world has already adjusted to the internet - science hasn't, because publishers have used their monopoly over our scientific knowledgebase to systematically prevent progress.

Some food for thought: science is mostly funded by public money. A small portion of that money goes to paying scientists - the rest goes on products and services bought in the process of research. Some of these are necessary. But publishing takes a large chunk of that funding stream - they charge us thousands of dollars to put articles we write on their website. In almost all cases they add no value at all. Then, they charge us, and anybody else, to read what we wrote.

But maybe it just costs that much? There are two issues here: firstly, for-profit academic publishers have some of the highest profit margins of any large business (35-40%). Secondly, they are charging thousands of dollars for something that with modern technology should be nearly free. They are technically incompetent to the extreme - not capable of running an internet company that really serves the needs of science or scientists.

They systematically take money that was intended to pay for science, and they do it by a mixture of exploiting their historical position as knowledge curators and abusing intellectual property law. They also work very hard to keep the system working how it is (why wouldn't they? $_$) - by political pressure, by exploitative relationships with educational institutions, by FUD, and by engineering the incentive structure of professional science through aggressively promoting 'glamour' and 'impact' publications as a measure of success.

The biggest publishers are holding science back, preventing progress to maximise their profit. We need to cut them out, and cut them down. Take back our knowledge and rebuild the incentives and mechanisms of science without them being involved.

I'm in the lucky position of having access to most publications legally. But I cannot imagine what I would do if our library didn't have subscriptions. The prices most publishers demand are insanely high and simply not affordable if you need just a dozen papers or so.

Especially considering that the research and the writing are done by scientists, and the review is done by other scientists. For free. The writers even pay a lot of money to get published. So I wonder what justifies these price tags for offering a PDF for download.

Don't get me wrong - I can still see a role for the publisher in the scientific world. But perhaps the monetization model should be reworked... As the article said: let's see how this whole publishing world will change. Open Access and comparable models are becoming more and more popular.

Fundamentally, we're talking about the dissemination of knowledge. Yes, it is copyright infringement, but calling this "piracy" immediately associates this act with both theft and brutal disregard for the law.[0] That is not what is happening here.

With that said---I'm a Nature subscriber, and I'm pleased to see the emphasis on "Open Access" by many scientists and organizations. Hopefully this trend will continue, and silly issues like individuals requesting PDFs from fellow scientists won't be termed "piracy".

It puzzles me that the most significant problem with open access receives little mention in discussions on HN: it changes the incentive structure of publication, from one where the publisher has to please the people buying the journal to one where it has to please the people paying to submit articles.

This is what makes the situation profoundly more complex compared to other applications of copyright, say in the software industry, where switching to an open-source model clearly doesn't change the incentives, i.e. who assesses the quality of the software.

The long-term effects on academia of switching to a model where the taxpayer gives money to scientists to pay for open access submission of their research are hard to evaluate, and do not get enough thought (imho).

That clearly doesn't mean that there aren't bad journals that are not OA, nor that for the benefit of the public some sort of arrangement shouldn't be found for older research: I'm a big believer in "faster decaying" copyright in general, and mandating that all publications describing research that is publicly funded become OA after, say, 30 years, would help significantly.

I've never published a paper, and can't understand why we need actors like Elsevier and other paywalls for scientific research publication. What motivates scientists to use a publisher's services? Can't these be replicated by setting up a government publication house?

I upload my papers through Researchgate. I know that it may not be legal to do so, but it is password protected, and hasn't been challenged by too many publishers. Sharing this way makes great sense for the author. You want people to read your paper, and it gives a way to do so. You must create an account, but many papers that would otherwise be blocked can be found this way.

The other trick I recommend people try if they frequently have trouble finding papers is to try EndNote. It is a little expensive, but I found it to be great at finding papers that I couldn't get through the official sources with my school's access.

I don't see any problem with having Elsevier manage publications that prevent people from copying their content. Just as long as that content is also available elsewhere for free, if it's publicly funded research.

I assume the problem is that Elsevier doesn't much like when articles are also made available outside their publications? Well, then either starve them of all publicly funded content or just have them accept that all the publicly funded content will always be available outside their publications. It's as simple as that.

How hard would it be to pass a law requiring that publicly funded research be publicly available? Why aren't such proposals made? If they are, what has stopped this from already becoming law?

> The original tweet is deleted, so there's no public record of the paper changing hands.

Why is it assumed that there is no public record of the paper changing hands? They tweet the request publicly, so it stands to reason that someone is paying attention and archiving. I suppose the key word here is "public", but I'm not sure why that matters if the goal is covering up illegal activity.

I was expecting the secret codeword to be 'preprint'. When I was in academia not too long ago, I would often ask authors for the preprint of this or that paper, and they'd usually send it back promptly.

I also don't like it, but the paper needs to be printed and reviewed. This is not free. Perhaps we should agree that the publishing group pays the entire cost of the article, so that it can be free after the process of publishing it? Or boycott paywalled publishers, maybe go for PLOS? If you have ever complained about paywalls, don't ever publish in a paywalled journal yourself.

I'm all for free papers, by the way - nothing is more annoying than researching something and hitting paywalls - but someone has got to pay the people doing the publishing work.

Also: if I order a paper from our library or download it myself, it often comes with an on-the-fly generated cover page with my IP address on it. One can remove that, certainly, but there may be other mechanisms to tag papers. Amazon reportedly investigated (and implemented?) putting specific, unique errors in DRM-free ebook copies to identify sources of piracy. So I wouldn't advise you to just send the PDF around, unless maybe you are the author and have a PDF that did not go through the publishing process.

>Also the code is faster, likely because it doesn't have to load another module at runtime.

Sure, but common sense should guide programmers on such tradeoffs. The extra time spent loading an additional library dependency is amortized over the total execution time of the program - IF the program makes repeated use of the library.

If a JS file only has a single getElementById() call, pulling in the jQuery library just to get the "$" syntax would be overkill.[1] However, if the JavaScript has many complicated DOM selects and a bunch of animations, the extra jQuery load time can be justified.

One reason to use a library that the author didn't touch on is "insurance against unknown edge cases." For example, I could attempt to write 50 lines of code to uppercase a lowercase Unicode string. However, my attempt would have bugs in it. Instead, it would be more prudent to use the ICU library. It's a hassle to add that dependency, and it adds many thousands of lines more than my "simple" program, but the ICU developers covered more edge cases than I ever thought of.
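To make the Unicode point concrete, here's a sketch of the kind of edge case a hand-rolled uppercaser misses (using JavaScript's built-in toUpperCase as the "real" implementation; naiveUpper is my strawman):

```javascript
// A naive ASCII-only uppercaser - roughly the "50 lines of code" approach.
function naiveUpper(s) {
  return [...s].map(c =>
    c >= 'a' && c <= 'z' ? String.fromCharCode(c.charCodeAt(0) - 32) : c
  ).join('');
}

// German sharp s has no uppercase form; full Unicode casing expands it to "SS",
// which even changes the string length.
console.log('straße'.toUpperCase()); // "STRASSE"
console.log(naiveUpper('straße'));   // "STRAßE" - the naive version misses it
```

And that's before getting into locale-dependent rules like the Turkish dotless i, which is exactly why deferring to a battle-tested library like ICU is usually the prudent call.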

This reminds me of the famous quote by Joe Armstrong -- You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

Most modern software has incredibly complex dependency chains and that's what makes it fragile and unpredictable most of the time. If we focus on making the languages, runtimes and core libraries flexible enough that we don't need to assemble code from dozens of hobbyist GitHub projects to put up an app with a reasonably modern UI, we would make a huge step forward.

Many times the "right" thing is a balance between using a library/module as a black box and implementing something yourself. It usually involves a lot of considerations.

* License: is it compatible with the rest of your code?

* Relative size of the module vs. what you need from it: the size of the module introduces a non-trivial maintenance burden.

* Difficulty of just doing it yourself: is the thing you need to do non-trivial, like, say, encryption or distributed consensus?

* Is the module compatible with the internals of your system? Will it require significant changes to how your code or application works?

* Platform support? Will the module work on all the platforms your code/application will run on?

All of these things influence the decision. A knee-jerk response of "Just use X" may be right, but you'll find yourself in the position of not knowing why it's right, and thus unable to adjust if it ever stops being right.

I often find myself feeling the same way about using gems in ruby. It's amazing how quickly I can get something extremely functional built out by adding a handful of gems, but after a few months I look back and realize how much extra (invisible) code I've added when I really just needed a tiny sliver of the functionality provided.

One is the role of technology in sports, which is really interesting. In other sports there is a lot of debate over what technology is allowed and what isn't. I would be really interested to see how far someone could hit a golf ball if there were no restrictions on the club or ball, or how fast a person could swim if there were no restrictions on the swimsuits.

The second thing, however, is that the text of the story and the video have different focuses. The text focuses on telling the story of how this is a grassroots movement by some athletes. The video, however, seems to have a more pronounced undercurrent that this might really be about one company, BalancePlus, trying to put pressure against an upstart competitor, icePad, who is eating into their market share. I think it is really interesting that the text doesn't emphasize this as much as the video does.

This seems a bit like wooden tennis rackets a few decades ago, though with a less mainstream sport the rules body may have more pull than manufacturers. But the bottom line is that sports change over time or die. Changes simply mean that a different set of skills come to the fore and others become less valuable.

Personally, I actually prefer the 16:9 aspect ratio for any screen that has sufficient raw vertical resolution to not hinder productivity.

I feel that once a device gets past the point where the lack of vertical resolution limits productivity, the marginal utility offered by even more vertical resolution becomes rather insignificant, to the point where I'd probably benefit more from having more horizontal resolution for things like snapping windows side-by-side and not having black bars on 16:9 video.

I have a Surface Pro 3 and owned a Pro 1 (1st gen), and I have to applaud Microsoft for iterating quickly on their hardware line. It's been ~3 years, and (in my personal experience) general consumer perception of the Surface line has been gradually changing. It went from an iPad competitor, which was a terrible comparison, to generating its own category: a tablet that can replace your laptop.

I typically skip a generation when upgrading machines, but the Pro 4 (based on this review) solves all the small quibbles I had with mine. It's looking like I'm going to upgrade to the Surface Pro with Iris graphics. Still on the fence about the Book; I don't really need the laptopness.

To me, it's almost as much a reflection on Microsoft's OEMs as on Microsoft itself that, in 3 years, Microsoft has iterated from its first device, which was a commercial flop, to a tablet/ultrabook whose only real criticism is the lack of a USB Type-C port.

Where would Microsoft be today if they had given up on their OEMs 5 years sooner, and gone head to head with Apple on hardware?

Strangely my workflow is in a place where most of my graphic programs also exist on Windows (Lightroom mostly, everything else is easily changeable).

It's my terminal workflow that I can only use on Mac/Linux: Zsh, Vim, Tmux, LaTeX, Python, and, most importantly, a package manager (Apt on Linux, Homebrew on the Mac). Most of it I can configure on a new system by pulling the config files from my GitHub repository, in less than 15 minutes after a fresh install.

That's what's mainly holding me back from going to Windows (that, and the application update process, which is still awful, as I can see from my VMware installation of Windows 10... seriously, updating the various components of Visual Studio in a semi-manual way is just ridiculous). Is there any good alternative for the shell on Windows that doesn't involve considerable tinkering?

>"If an infinite set can be put into one-to-one correspondence with the natural numbers (N) it is called a countable set. Otherwise it is uncountable."[1]

This paradox hinges on the strange notion of cardinality of infinite sets. Specifically, the set of all even integers, the set of all odd integers, and the set of all integers(!) have the same cardinality, and therefore the same "size".

I like this paradox for its simplicity, but there's just one aspect that cracked me up.

>Suppose the hotel is next to an ocean, and an infinite number of aircraft carriers arrive, each bearing an infinite number of coaches, each with an infinite number of passengers.

hahaha.

How would we extend that?

Suppose we have an infinite number of passengers, carried by an infinite number of coaches, transported by an infinite number of aircraft carriers, shoved in by an infinite number of tsunamis, which occur on an infinite number of continents, on an infinite number of Dyson spheres...
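The extension works, though, because a countable union of countable sets is still countable. A sketch of the standard trick, using Cantor's pairing function to hand out room numbers (function names are mine, not from the article):

```javascript
// Cantor's pairing function: a bijection from pairs of naturals to
// naturals. Every (x, y) gets a distinct number by walking the
// diagonals of the grid.
function pair(x, y) {
  return ((x + y) * (x + y + 1)) / 2 + y;
}

// Each extra "infinite layer" (carriers, tsunamis, Dyson spheres...)
// is just one more nested application of pair(), so any finite tower
// of infinities still fits into one countable hotel.
function room(carrier, coach, seat) {
  return pair(carrier, pair(coach, seat));
}

console.log(room(1, 2, 3)); // 208 -- a unique room for that passenger
```

No two passengers collide, and no room goes unused along each diagonal, which is exactly the one-to-one correspondence with N that the quote at the top asks for.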

Yeah, I freeze and thaw cells constantly without much trouble -- just plop those suckers in a solution of 10% DMSO and a serum-containing isotonic liquid, put them in the freeze-assist device, then put that into the -80C freezer and transfer it to the liquid nitrogen freezer the next morning. Everyone does it this way, and has for years. I've always wondered if the freezing and thawing process subtly changes cell functioning (beyond the time given to cells to rest after being thawed), but I'm guessing that if that were the case, someone would have noticed by now.

The bugger that haunts human cryonics is that thawing is never perfect, because the cryoprotectants used to prevent ice crystals within the cells are usually toxic. If you freeze cells that are measured to be 100% viable/alive at the time (very common), then thaw them using best practices, you're going to have some cell death -- maybe 1-5% if you're fast (less time spent in toxic cryoprotectant) and lucky. If you're unlucky or slow, you can be looking at 25-45% of your originally healthy cells dead by the end of the thawing process. The remaining cells are usually extremely discombobulated and can take days to return to their baseline. This is completely fine if you're tooling around in a research lab or industrial lab, but even a 1% loss is probably too much for a human brain to bear and remain the same as before.

I suppose that if you work under the assumption that the future technology cryonics relies on for thawing will exist, cell loss during thaw will not be a problem; I find this possibility to be fairly likely over a long time span. Alternatively, you could assume that there will be advanced ways of restoring brain function or generating fresh neurons after systemic damage-- quite a stretch if you ask me, but it's conceivable. I think that ultimately the goals of cryonics will be scientifically realizable for those who were most recently preserved.

The perspective of the BPF folk is perhaps a useful calibration point for those coming to this as a new topic: they are critical of cryonics for some detailed technical reasons (with plenty of room for debate) and think plastination should be developed as an alternative technology, but they are firm supporters of the concept of brain preservation and of the evidence to date for fine-structure preservation. For example, see this response to an earlier and very shoddy article critiquing cryonics at Technology Review:

1) What's the diff logic? At first glance, it looks like the JSON is reformatted (maybe canonicalized in some way) and then a line-by-line diff is applied. Is there more to it? Since the tool seems JSON-aware, I was surprised to see an added trailing comma show up as a difference.

2) Do you have plans to expand the kind of HTTP requests users can make? It would be nice to use different verbs, headers, and request bodies. Runscope has a similar tool[0] built in that I believe (haven't tried it yet) allows a bit more flexibility, but it would be nice to have a standalone tool available.

[Error] TypeError: undefined is not a function (evaluating 'Array.from(e)')
_toConsumableArray2 (app.min.js.pagespeed.ce.ozGaCBt6Kj.js, line 1)
s (app.min.js.pagespeed.ce.ozGaCBt6Kj.js, line 1)
f (app.min.js.pagespeed.ce.ozGaCBt6Kj.js, line 1)
onload (app.min.js.pagespeed.ce.ozGaCBt6Kj.js, line 1)

Just the other day I needed something similar and was disappointed that I couldn't find it.

I wanted to discuss something with a remote colleague, and to illustrate it I wanted a visual diff of two files. I was hoping there was a nice little web app offering this, but I was forced to screenshare (I could have shared a terminal, but it was more hassle).

It's pretty easy to write a recursive diff function that compares JSON strings, in order to avoid the JSON -> diff by line hack that you're doing. But it's a clever hack that easily translates to the command-line.
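For the curious, a minimal sketch of such a structure-aware recursive diff (all names are mine, not the tool's): walk both values in parallel and collect the paths where they disagree, instead of comparing reformatted text line by line.

```javascript
// Recursively diff two parsed JSON values, collecting the paths at
// which they differ. Arrays are handled as objects with index keys.
function jsonDiff(a, b, path, out) {
  path = path || '$';
  out = out || [];
  if (typeof a !== typeof b || (a === null) !== (b === null)) {
    out.push({ path: path, left: a, right: b }); // different kinds of value
  } else if (a !== null && typeof a === 'object') {
    var keys = {};
    Object.keys(a).forEach(function (k) { keys[k] = true; });
    Object.keys(b).forEach(function (k) { keys[k] = true; });
    Object.keys(keys).forEach(function (k) {
      if (!(k in a) || !(k in b)) {
        out.push({ path: path + '.' + k, left: a[k], right: b[k] }); // added/removed key
      } else {
        jsonDiff(a[k], b[k], path + '.' + k, out); // recurse into shared keys
      }
    });
  } else if (a !== b) {
    out.push({ path: path, left: a, right: b }); // differing primitives
  }
  return out;
}
```

The line-diff hack wins on one axis, though: its output plugs straight into existing diff viewers and the command line, while this returns structured paths you'd still have to render.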

"The Race for a New Game Machine"[1] is a pretty good read on the development of the Cell and Xenon[2] processors. Giving up out-of-order execution seems like a really bad decision, and the internal fighting at IBM, if accurately portrayed, is really sad.

A side question: What in the POWER architecture makes it hard to implement? I was told the addressing modes are complicated enough that it will always be slower and harder to create than other processors. I'm wondering if this is urban myth or has some basis in reality?

While this may be simpler than REST and cleaner than SOAP, I find REST to be far more elegant.

>An HTTP 200 is returned on successful completion, and HTTP 500 is returned in the case of an error (i.e. an exception). Note that exceptions are intended to represent unexpected failures, not application-specific errors. No other HTTP status codes are supported.

Since you are just using HTTP as a transport layer, you could as well customize it to your particular needs rather than defining a spec. This might result in simpler client code, but then why not use Thrift, if that's what you need?

I'd be very interested to see RethinkDB analyzed, particularly with the 2.1 release promising high availability through Raft. RethinkDB and Aphyr have talked about doing Jepsen tests for it, but I'm not sure where that's landed (https://github.com/rethinkdb/rethinkdb/issues/1493).

I've been working on learning React, and finding it particularly difficult. It appears there are a large number of things I need to have in place, and pieces of knowledge I need, before I can use it. These include some kind of JS compilation step like Webpack or Browserify, something like Babel, a knowledge of how to use ES6, an understanding of React, and an understanding of how to use React-Router.

Although I've done some JavaScript on the front end, I haven't done the other things I mentioned. The tutorials all seem to assume I know how to do everything but one little piece of detail, and I'm finding it difficult to take the first bite of the elephant. It's hard to tell where to start on learning this stuff, and how much I need to learn before I can use it.

Any suggestions for what resources and approach to use to learn React? My goal, eventually, is an app that runs in 3 versions: web, iOS, and Android. I don't intend to use JavaScript on the server.

A number of these guidelines reinforce my biggest complaint with React: it is architecturally difficult to avoid monolithic view files.

In a traditional web app, we have 4 layers: client views, client app, server app, database. React, described as a strict view layer, is in reality being used as much more. At this point, it is not just consuming the client app but also taking nibbles out of the server app.

To each their own, of course, but I would ask people to hesitate over these decisions. The architectural issues with monolithic views are well known, and just because we have a shiny new tool does not mean we should cast that understanding aside.

It's probably a good time to start looking at using the Fetch API [1] for making AJAX requests instead of using jQuery or Backbone (or even XMLHttpRequest). Support seems to be growing quickly and Github's polyfill [2] can help cover the gaps.
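For anyone making the switch, a minimal sketch of a fetch-based replacement for a $.ajax call (function name is mine). The main gotcha coming from jQuery: fetch() only rejects on network failure, so HTTP errors like 404/500 resolve normally and you must check response.ok yourself.

```javascript
// Fetch-based JSON GET. Note the two departures from jQuery habits:
// 1. HTTP error statuses do NOT reject the promise -- check response.ok.
// 2. Cookies are not sent by default -- opt in via `credentials`.
function getJSON(url) {
  return fetch(url, {
    headers: { 'Accept': 'application/json' },
    credentials: 'same-origin'
  }).then(function (response) {
    if (!response.ok) {
      throw new Error('HTTP ' + response.status);
    }
    return response.json(); // itself returns a promise for the parsed body
  });
}
```

Usage is then just `getJSON('/api/items').then(render).catch(showError)`, with the polyfill covering browsers that lack native support.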

I use Backbone models for AJAX since they make decisions such as PUT vs. POST, and model.save() looks cleaner than $.ajax. Also, Backbone collections provide a declarative way to handle sorting and duplicate models. But these models are internal to the Store and not exposed to the views. I'm still a React newbie -- is this a valid reason to continue using Backbone?

Style question re DOM manipulation: third-party embeds, such as Twitter and Instagram, often come as specially classed blockquote elements that are swapped with iframes by a jQuery plugin. What is the best way to integrate this with React?

I would love to read more about styling inline and completely removing external CSS files.

Are there any CSS frameworks that have been converted to JS but are not their own components yet? It's easy to find React-Bootstrap, but that comes with ready-made components; I am looking for styling that's purely in JS so I can make my own components.

Also, would a route component be considered logic or presentation? Or maybe it is its own thing and they forgot to mention it.
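For what it's worth, the core of pure-JS styling doesn't need a framework at all: React's style prop just takes a plain object, so "extending" a style is ordinary data manipulation. A sketch (names are mine):

```javascript
// Styles as plain JS objects -- exactly what React's `style` prop
// consumes. Composition is just Object.assign, with later objects
// overriding earlier ones; the base object is left untouched.
var baseButton = {
  padding: '8px 12px',
  borderRadius: 3,
  border: 'none'
};

var primaryButton = Object.assign({}, baseButton, {
  background: '#0074d9',
  color: '#fff'
});

// In a component: <button style={primaryButton}>Save</button>
```

From there, converting an existing CSS framework is mostly mechanical: each rule becomes an object, and camelCased property names (borderRadius, not border-radius).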

"Fit them all on the same line if you can." (HTML properties) -- OK, easy enough; my editor has no limit on its line length, so I can always fit them on one line. Or did they expect it to fit on one line visually? If so, at what width do they expect my editor window to be?

In all seriousness, though, I appreciate the brevity of this guide. It can be quickly read and understood, and is not the fully-fledged book I've seen from other places.

It makes me feel so sad that the "state of the art" in front-end development is apparently rerunning your render code on every little update. Yes, I know there are shortcuts to making this more efficient, but in essence the technique remains inelegant in the sense that it does not extend well to other parts of the code (that run computations that might also partially, rather than fully, change on an update).

Still, there's been nearly no progress on fast decimals, which are extremely important in financial applications.

I'd even say that the only place where floating point is necessary is in simulations (physics, 3D, analog signals), all of which should properly be done on GPUs. Everything else (2D layouts, finance, data processing) is better served by either rationals or decimals.

We should remove floating point support from general-purpose CPUs and leave it to GPUs, where it belongs.
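The classic illustration of why binary floating point is wrong for money, plus the usual workaround in languages without a decimal type (function names are mine):

```javascript
// Binary floating point cannot represent 0.1 or 0.2 exactly:
console.log(0.1 + 0.2 === 0.3); // false -- the sum is 0.30000000000000004

// Common workaround absent hardware decimal support: keep amounts as
// integer cents, so arithmetic is exact (up to 2^53), and only format
// into a decimal string at the display boundary.
function addMoney(centsA, centsB) {
  return centsA + centsB;
}

function formatCents(cents) {
  return (cents / 100).toFixed(2);
}

console.log(formatCents(addMoney(10, 20))); // "0.30"
```

Decimal floating point (as in IEEE 754-2008) makes this unnecessary, but without hardware support it's emulated in software, which is exactly the "fast decimals" gap the comment is pointing at.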

I just don't think many people realize just how much legal corporate and government "business" is created by the insane war on drugs, and I am convinced this is the reason why it continues.

Like I mentioned before, I just spent 6 months in the deep bowels of the criminal justice system for drug possession here in Florida, and those observations have shaped this view.

The cops, the judges, the lawyers on both sides (who eventually become the politicians that make the laws), the clerks, the guards (do they hate being called that!), the jail/prison administrators... they ALL are making pretty awesome livings from the war on drugs, and have zero, repeat ZERO, incentive to change anything.

The problem is not a legal one, it's economic.

The public is fed the propaganda of wrecked lives and violence to keep the status quo. Until the population somehow wakes up and sees that the CJS is totally broken, and perhaps even more corrupt than the drug game, I doubt anything can change.

There's so much profit in the drug business only because of the illegality of many drugs. Changing that would kill the profits of many very rich and powerful criminals and cartels, so it's very understandable that those people do everything they can to prevent their business model from collapsing.

The usual way of doing it is simply lots of lobbying: Paying "well meaning" people, journalists and politicians to stay on the track of keeping most drugs illegal, so that the drug lords and cartels can continue to earn money.

So does anybody really wonder why the (obviously totally pointless) "war on drugs" is still being waged, and will probably be waged for quite a while?

Besides all of the stated reasons to legalize/decriminalize, I think just as important is personal liberty.

I should be free to experiment with my own consciousness (so long as it does not infringe on the rights of others) -- how much more personal can you get? For the government to impede this is unconscionable (pun intended).

I'm not too keen on Silver's analysis/conclusion that drug offenses account for such a low fraction of inmates' crimes. Drug possession turns a lot of otherwise non-criminal behavior into a "violent felony", and prosecutors love tacking these extra charges on when they can.

The other day Richard Branson posted a leaked report from the UN Office on Drugs and Crime that agrees with the notion of drug use as a health issue, not a criminal one. Of course, they've turned around and said that's not their official stance. Sigh.

Is it creating violence or exacerbating violence? I'm more inclined to choose the latter. The reason why I distinguish the two is that I see the continuation and increase in drug use as something symptomatic of other issues in the big picture.

Economic issues, politics, racism.... oppression on various fronts from the system down to the family to the self.

Drugs (and alcohol) are an escape. Normal activities also provide an escape when people get obsessed with them. TV, video games, food (my escape), exercise for some addicts, sex, etc.

If you have a business disagreement you can't go to drug court to resolve it. Sometimes attempts to resolve differences go wrong. At some level the violence in the drug black market is simply because it is a black market.

Yeah, not so much. Yes, there is a severe moral problem here, but please do not make moral arguments! It's folks with moral arguments riding around on high horses that got us into this mess. Instead, argue from the standpoint of practicality (which she does).

One of the practicality arguments she does not make, which deserves mentioning, is that because the drug war is unwinnable, there are too many laws. This makes folks with the power of selective enforcement lords over the rest of us.

Have a traffic stop? Cops ask to search your car? You have a right to say "no". But if you do, be prepared to wait around until the drug dog shows up. He'll sniff around your car and "alert" the cops, even if there's no drugs present. Then, guess what? They get to tear apart your car while you watch. All because of the war on drugs.

Let's say you are a drug user. You have a joint in the ashtray. In this case, it gets even better. Then -- if I'm not mistaken -- they get to take your car! A few dollars worth of illegal pot, which might not even be yours, and you could lose tens of thousands of dollars worth of car.

It's not that this is morally outrageous. It certainly is. It's that a system of justice cannot maintain the consent of the governed when it turns LE officers into something approaching highway bandits. Selective enforcement of drug laws -- both by cops and prosecutors -- distorts the legal system so much as to make it unworkable. Sure, it's bad, but the bigger point is that it cannot continue working in this fashion. Something's gotta give.

I liked the article. It's good to see public discourse slowly become much more reasonable about drug addiction and its consequences. One caution, though: in my opinion what we need to do is still stay tough on violent, hardened criminals while being more pragmatic about drug crimes. Otherwise we'll end up being slandered as soft-headed and irrational.

Violent crime in America rose dramatically as a result of worsening race relations after the death of Martin Luther King, a lack of employment and investment in local communities, and an increase in the use of heroin and (later) easier-to-produce drugs like crack.

Increases in robbery and petty crime closely track both the increase in drug use and the violence associated with it. But the war on drugs' main influence on this violence has been to keep the pressure on drug dealers: increasing the risk of selling the product and making it less available, thus driving up prices, increasing competition, and therefore promoting violence between drug dealers -- and by drug users trying to afford what they're addicted to.

And all of this leads to increased incarceration, not just from drug charges, but from the increased violence associated with the drug trade, gang warfare, and an unlawful under-society where people do whatever they can to get by.

A flow chart would make it a bit easier to grok, but basically the drug war throws fuel on a fire that was only simmering before.

I have a new law to go along with Conway's: Organizations or individuals that produce content... are constrained to distribute said content in formats that mirror the tools used to create it.

For example, a person using the Pages application on a Mac might be inclined to distribute their writings via PDF using Mac-typical fonts utilizing Apple-inspired layout and formatting options. You know, instead of just making a web page or using a web publishing tool.

Despite the annoying format, I did read the PDF, and while I like the sentiment, I can't help but think it's a lot of wishful thinking. People have been trying to make visual programming tools (i.e. tools with real-time positive and negative feedback loops for the brain) for a very long time now, and they always come up short.

Viewing the labor of programming in real-time makes sense though and I think we can get there for a lot of use cases. Taking advantage of the web and interpreted languages (e.g. Python) or instant-compiling languages (e.g. Go) is probably how we'll do it.

The opposite of progress in this area would be using extremely verbose languages like Java or C#, or languages that require a lot of preprocessing and/or compiling and/or complicated build and deployment processes. Java has got to be the worst here, with slow startup times and complicated (often extremely time-consuming) deployment processes... and that's just for the IDE! Haha