I'm a huge believer in colocation/on-prem in the post-Kubernetes era. I manage technical operations at a SaaS company, and we migrated out of the public cloud and onto our own private, dedicated gear almost two years ago. Kubernetes, and especially CoreOS, has been a game changer for us. Our Kube environment achieves a density that simply isn't possible in a public cloud environment with individual app server instances. We're running 150+ service containers on each 12-core, 512 GB RAM server. Our Kubernetes farm (six servers configured like this) is barely at 10% capacity, and I suspect we will continue to grow on this gear for quite some time.

CoreOS, however, is the real game-changer for us. The automatic updates and ease of management are what took us from a mess of 400+ ill-maintained OpenStack instances to a beautiful environment where servers automatically update themselves and everything "just works". We've built automation around our CoreOS bare-metal deployment, our Docker container building (Jenkins + custom Groovy), our monitoring (Datadog-based), and soon, our F5-based hardware load balancing. I'm being completely serious when I say that this software has made it fun to be a sysadmin again. It's disposed of the rote, shitty aspects of running infrastructure and replaced them with exciting engineering projects with high ROI, huge efficiency improvements, and more satisfying work for the ops engineering team.

This is a great article, but it would be good to state up-front that the author works for a company that sells a service designed to help you go on-prem. It's not clear until later in the article that this is the case, and saying so earlier would put the article in better context.

I work for Pivotal, which has a slightly different horse in this race: Pivotal Cloud Foundry. It's based on the OSS Cloud Foundry, to which Pivotal is the majority donor of engineering.

Lots of customers want multi-cloud capability: they want to be able, relatively easily, to push their apps to a Cloud Foundry instance that's in a public IaaS or a private IaaS. They want to be able to choose which apps go where, or have the flexibility to keep baseload computing on-prem and spin up extra capacity in a public IaaS when necessary.

It also happens that lots of CIOs have painful lock-ins to commercial RDBMSes, and they don't want to repeat the experience. They want to avoid being locked into AWS, or Azure, or GCP, or vSphere, or even OpenStack.

CF is designed to achieve all of these. The official GCP writeup for Cloud Foundry[1] literally says "Developers will not be able to differentiate which infrastructure provider their applications are running in..." (can't say I completely agree, GCP's networking is pretty fast).

If I push an app to PCFDev -- a Cloud Foundry on my laptop -- it will run the same way on a Cloud Foundry running on AWS, GCP, Azure, vSphere, OpenStack and RackHD.

I agree that Kubernetes is a game changer which makes it much easier to run your own applications. If all you have is VMs, then the managed services (RDS, EFS, etc.) offered by AWS are more effective. With a container scheduler there is less maintenance, and the decision is harder.

We've been running a hybrid on-prem solution for nearly 3 years now. It's been challenging, but Kubernetes has drastically simplified it for us. It now means we can spin up a client site in 1-2 business days, provided we have a server on-site ready to go.

Quick note, they define Private Cloud as "Customers private cloud environment on a cloud provider" but that is not the definition I go by. Just a heads up while reading.

I would define Private Cloud as a non-bare-metal solution on-prem or in a traditional colo setting, where the hardware is owned by the company in question. Hybrid Cloud would be a Private Cloud and a Public Cloud bridged in some way.

Can't you just, like, go dedicated first? Or better, start dedicated and turn up cloud capacity every day at 6pm when your traffic is 10x. (Really, no one explained how they can scale their DB that fast (because you can't), only the app tier, which is probably badly designed if it's that slow.)

I think unless there is a big turn of the tide, supporting on-prem is just trouble these days. Most large enterprise customers are willing to use SaaS already.

Forget about the huge issues in the support organization for a second: the impact on-prem has on your release cycle has consequences that are hard to fully grasp. So much for "continuous development and release" if you have to keep supporting old versions of software for a year.

Build your stack so that you can easily migrate clouds (i.e. don't use all the super high-level AWS APIs). It's a good idea in general, and it should make going on-prem doable enough if you are worried about having that option at all.
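In practice that mostly means hiding provider services behind thin interfaces of your own. A minimal sketch, with all names hypothetical (the putObject call is the aws-sdk v2 shape):

    // App code depends only on storage.put(key, body), never on the AWS SDK
    // directly. Moving on-prem means writing one new adapter, not touching
    // every call site.
    class S3Storage {
      constructor(s3, bucket) { this.s3 = s3; this.bucket = bucket; }
      put(key, body) {
        // aws-sdk v2 style: s3.putObject(...).promise()
        return this.s3.putObject({ Bucket: this.bucket, Key: key, Body: body }).promise();
      }
    }
    class DiskStorage {
      constructor(fs, dir) { this.fs = fs; this.dir = dir; }
      put(key, body) {
        return new Promise((resolve, reject) =>
          this.fs.writeFile(this.dir + '/' + key, body,
            (err) => (err ? reject(err) : resolve())));
      }
    }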

Going on-premises might have some advantages, but it comes with completely different problems that you didn't even think about. Some of them:

* You thought of all the power redundancy, ideal cooling, humidity, etc., but then your office gets robbed and all your computers get stolen.

* Network wiring... some people are lousy and create an entire spaghetti mess, or used a crappier type of cable, or there's crosstalk from other equipment, or someone stepped on a cable and damaged it. Which one is it? Good luck finding out.

To anyone screen-capturing small fonts as a demonstration, or capturing digital text at a small resolution: I don't believe that's the purpose of this OCR library. (As a specialized problem, that might be easier to solve, depending on the typeface.)

Magic . Read this to yourself. Read it silently Don't move your lips. Dont make a suund Listen to yourself. Listen without hearing What a wonderfully weird thing, huh? NOW MAKE THIS PART LOUD! SCREAM IT IN YOUR MIND! DROWN EVERYTHING OUT. Now, hear a whisper. A tiny whisper. New, read this next line with your best crotchety old-man voice: Hello there, sonny. Does your town have apost 0 Awesome! Who was that? Whose voice was that? It sure wasnt yours! How do you do that? How?! Must be magic.

I routinely (daily) need to OCR PDF files. The PDF files are not scans. They are PDF files created from a Word file. The text is 100% clear, the lines are 100% straight, and the type is 100% uniform.

And yet, Microsoft and Google OCR spit out gibberish that is full of critical errors.

From a problem-solving perspective, this seems like an incredibly easy problem to solve in this exact use case, i.e. PDFs generated from text files. Identify a uniform font size (preventing o-to-O and o-to-0 errors), identify a font family (serif or sans-serif, then narrow it to particular fonts), and OCR the damn thing. And yet, the output is useless in my field.

I guess I just still have bad memories of jQuery's old almost-like-real promises. I'd rather never again have to think about whether I'm dealing with a real promise or one that's going to surprise me and break at run-time because I tried to use it like a real one.

For all those claiming issues with reading text from a screenshot of this page, note that this is more an issue with the original Tesseract library than with this library (which appears to wrap Tesseract compiled through Emscripten). I remember having a similar issue when I used the original Tesseract. The quick hack I found to fix it was to rescale any small text input images 3x before feeding them to Tesseract. I'm sure there are more intelligent solutions to mitigate that problem.
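The upscale hack is easy to reproduce in the browser. A minimal sketch, assuming your Tesseract.js build exposes Tesseract.recognize and a text field on the result (check your version's API):

    // Rescale a small-text image 3x on a canvas before handing it to Tesseract.
    function ocrSmallText(img) {
      var scale = 3;
      var canvas = document.createElement('canvas');
      canvas.width = img.naturalWidth * scale;
      canvas.height = img.naturalHeight * scale;
      var ctx = canvas.getContext('2d');
      ctx.imageSmoothingEnabled = true; // let the browser interpolate the upscale
      ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
      // Tesseract now sees the enlarged bitmap instead of the tiny original.
      return Tesseract.recognize(canvas).then(function (result) {
        return result.text;
      });
    }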

I've been using this library to read screenshots of Pokemon Go to automatically calculate Individual Values for each Pokemon.[1] It's worked great on desktop, but on mobile Safari, where it matters most, the library causes the browser to crash :(

This won't destroy your phone. Phones are extensively tested when it comes to both creating and withstanding EM interference. The best it could do, maybe, is crash one that is on the edge of its spec, and you would probably have to get pretty close for that (the field falls off with the square of the distance).

This is why I use 'browser isolation': separating different types of surfing activity into different buckets. Currently the best way to do this in Firefox is to create multiple profiles; in Chrome, you can simply add a different user/persona.

Having one profile, or even an entire dedicated browser, just for Twitter/FB ensures the login doesn't spill over into other sites. If you're surfing the web heavily, I would recommend spawning a new private window so cookies and other artefacts don't bleed into your session.

It sounds like common sense, but many people have cookies and login information persisting for years at a time in their browsing sessions. The Mozilla Firefox team is planning to introduce a feature that makes compartmented surfing sessions a lot more user-friendly by separating sessions into tabs. Currently, the 'profiles' feature of Firefox is not user-friendly and requires a bit of tinkering with the filesystem.
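For what it's worth, you can already drive profiles from the command line; these flags exist in current Firefox:

    firefox -ProfileManager        # create and manage named profiles
    firefox -P work -no-remote     # run a separate 'work' instance
    firefox -P social -no-remote   # ...alongside an isolated 'social' one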

The Firefox and Tor devs are cooperating to upstream a Tor Browser feature that isolates cookie stores and similar things based on the domain shown in the URL bar[0]. It's available in Nightly by enabling privacy.firstparty.isolate = true in about:config.

Additionally, they're also working on a more customizable version of that called contextual identities[1], which will eventually also be manageable by extensions[2].

And of course, addons that block cookies on cross-origin requests, or cross-origin requests in general, such as uMatrix[3], also plug this hole.

Attaching cookies to third-party requests is the source of many issues. In a similar demonstration [0], I showed that browser-based timing attacks (which can probably also be considered wont-fix) can be used to extract more specific information from social networks (e.g. one's political preference, based on who they're following).

Google is basically omniscient on a user-profile basis, with years of search, Gmail, and YouTube data on users. They should just write an algorithm and let it send out job offers with no human intervention, just like search.

Very simple and cool exploit. I wouldn't be surprised if this technique is already in use on various ad platforms. It's a really simple pitfall I think most of us can confess to having fallen into in the past (redirect attributes are pretty common in the wild).

This is the first I'd heard of GETs to login pages executing a redirect when the user is already logged in. I wasn't aware that so many did this.

Virtually every application I have built will render a simple response saying "You are already logged in" if you GET the login URL with an active session. As I understand the exploit, if a non-image is returned, the script assumes you are not logged in.

What value is there in redirecting a GET if you're already logged in? You redirect when the login form is submitted as a POST.
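For reference, my understanding of the detection trick (the URLs here are illustrative, not from the article):

    // Many login pages accept a ?next=/?continue= parameter and redirect
    // already-authenticated users straight to it. Point that at an image
    // and load the login URL in an <img>: logged in -> redirect -> image
    // -> onload fires; logged out -> HTML login page -> onerror fires.
    function checkLogin(loginUrl, cb) {
      var img = new Image();
      img.onload = function () { cb(true); };
      img.onerror = function () { cb(false); };
      img.src = loginUrl;
    }
    checkLogin('https://example.com/login?next=/favicon.ico', function (loggedIn) {
      console.log('logged in:', loggedIn);
    });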

Nifty, with Firefox containers each one shows the "mode" I'm in: Hacker News for the default container; personal has my Google world + open source + Dropbox; work has my work's Gmail world; and shopping has my Amazon account. It's like a verification that containers work!

Well, it's good to see it's partly wrong for me. It shows HN correctly, but also shows me logged in to Facebook and Tumblr, which is not correct. And it shows me not logged in to Gmail, which I am. Still, it's a dangerous flaw.

Hmm, weird: it correctly detected everything except for the false negatives of PayPal, Tumblr, and Spotify. Looking at the mechanism, I have no idea why this would happen, and opening the relevant links in my browser returns the favicon as it should. Weird.

This 'fingerprint' changes as you log in and out of various services, so it's not very reliable for uniquely identifying users. Regardless, it could still be used to profile you and then target content accordingly. For example, if you're logged into Hacker News, you're probably a programmer, and you're probably more interested in an ad for web hosting than wedding dresses, and vice versa for Pinterest.

What is happening here is not legal in the US, and a large porn website was sued for doing it. They were printing hidden links on the page, then checking the color with JS to see if you had visited the destination URL or not. The judge didn't think it was a fair business practice. Maybe these companies are not fixing this because of that legal precedent, having figured no one was doing it?
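That old trick looked roughly like this (modern browsers deliberately lie about :visited styles now, so it no longer works):

    // History sniffing via :visited, assuming CSS like
    //   a:visited { color: rgb(1, 2, 3); }
    function wasVisited(url) {
      var a = document.createElement('a');
      a.href = url;
      document.body.appendChild(a);
      // Visited links used to report the :visited color here;
      // browsers now return the unvisited style regardless.
      var visited = getComputedStyle(a).color === 'rgb(1, 2, 3)';
      document.body.removeChild(a);
      return visited;
    }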

What does Hacker News think about AI? Is it real this time, or are we in for another winter? I'm seeing a lot of grand claims, and it certainly seems like there are plenty of applications, but I'm still not totally convinced that it will turn the entire economy upside down.

Given the enormous number of press pieces, tweets, blog posts, conferences, degree programs, seminars, and interviews popping up, it seems like there has to be something more than just hot air here. Still, the most outrageous predictions hinge on breakthroughs in unsupervised learning happening. Taking the pessimistic view on science, what if we don't get there?

"But it also has some downsides that were gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages."

No doubt advances in technology have always done away with jobs. We're almost at the point where the biggest blue-collar occupation (truck driving) is about to be wiped out by self-driving trucks. What I'm concerned about is the government stifling innovation such as driverless trucks to retain those jobs, or some sort of regulation that stifles the technology's potential. What is the alternative?

Also interesting are the resolution targets: sets of black and white bars at varying widths in both X and Y orientations, used to determine the spatial resolution, i.e. how high-frequency a component could still be imaged as separate lines, similar to TV test cards. These are discussed in various blogs [1][2] with some amazing pictures. I think these were for both satellite imaging and spy planes (U2 and SR-71, plus less exotic surveillance systems), and not just in the USA. This article [3] shows some satellite test targets in the Gobi desert, presumably for Chinese (PRC) spy satellites, and also has a cool picture of the world's largest compass rose, at the Edwards dry lake bed, as well as explaining the crosses from the original article and describing radar altimeter targets (another dry lake bed) that are mapped to centimetre accuracy in altitude for calibrating GPS and other systems.

I find it a really interesting area of industrial/scientific archaeology, with some fascinating stories.

My dad was project photogrammetrist on Corona when he worked at Itek in the '60s. Lots of stories around focus and aiming challenges, since Corona was used to build maps of inaccessible regions of the world (e.g. Soviet Union ICBM sites). Focus targets gave high-contrast, known images to detect what kind of focus problems were being encountered, ranging from image smear from forward motion compensation failing or stretching the film; film sticking, stretching, and/or lifting off of the focal plane; star camera inaccuracies; thermal distortion of the camera, spacecraft, star camera, or film; etc. A few good books on Corona (https://www.cia.gov/library/publications/intelligence-histor...) and Itek (https://books.google.com/books/about/Spy_Capitalism.html) are out there. The same teams worked on subsequent KH projects (Gambit, Hexagon/Big Bird/BMF), as well as the Apollo and Viking camera systems.

The Corona project, a.k.a. the KeyHole satellites. If my count is right, 135 satellite launches, though not all were successful.

Fun fact: this was the early 1960s. CCD technology and IP transmission bitrates were a bit primitive[1], so the film cameras would eject capsules once the film had been shot, which would re-enter the atmosphere and be recovered, mostly through mid-air retrieval. The project was active from 1959 to 1972.

It's pretty amazing how much classic software can actually run, and it works pretty well. The Wolf3D clone is totally playable, and you can actually use LSDJ (one of my all-time favorite pieces of software). It seems to be running the real LSDJ, too, which is pretty impressive, considering it means the site has an embedded Game Boy emulator.

The trick in the "simulator" is to drink coffee, smoke a cigarette, smoke weed, then take lots of acid and procrastinate until the operating system is finished. Then launch it to finish the game. I got 194 #Hero

If you open the recycle bin, there's a zip file that actually downloads through your real browser to your real computer. I didn't bother opening it. It has a filename not everyone will immediately know how to delete. That's really not a funny thing to put out there.

I can't help but completely disagree with both the idea and the probability that this will ever happen.

Modern law is supposed to be open, so that anyone can read it. Reencoding laws, for whatever reason, strikes me as being fundamentally undemocratic.

The argument can be made that if legal "code" is taught in schools, then it's no more undemocratic than writing down law, but the chances of that happening seem slim to none.

Another argument is that law is already written in "special" English that most people can't figure out, but I think that having it in English alone is a big step toward having it being readable by almost anyone.

It would be neat to have law written as code and to have AI be able to parse it for you in lay language -- that is, you could ask it questions or pose situations to get a legally-binding answer. But then the question for me is, why don't we just have said AI read the current legalese and parse it?

Laws are not written in anything that can be called natural language; they are in a code called "legalese".

Legalese doesn't have to be formalized into program-like code. Rather, perhaps into mathematical language, with some of the notations that go with it. It needs to use clear logic, and set theoretic descriptions and reasoning. Law is all about logic and sets: what rule applies under what conditions, and what is included and excluded and so forth.
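As a toy illustration of "rules over conditions and sets" (the rule and thresholds here are invented, not any real statute):

    // A legal rule as an explicit predicate over a set of facts.
    var rule = {
      id: 'small-claims-eligibility',           // hypothetical rule
      appliesTo: function (claim) {
        return claim.amountUSD <= 10000 &&      // inclusion condition
               !claim.involvesRealProperty;     // exclusion condition
      }
    };
    console.log(rule.appliesTo({ amountUSD: 5000, involvesRealProperty: false })); // true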

I highly recommend listening to a recent episode[1] of Planet Money. This was by no means an accident. As a short summary of one employee's plight:

Ashley, who worked for them making only $35k per year in San Francisco, was continually harassed to sign people up for accounts they didn't want. An old man comes in, a pensioner, with $200 in overdraft fees due to being duped into excess accounts. She dips into her own savings to get him back in the black. She reports the incident to the internal ethics line. Nothing. Tries again. Nothing. She refuses to fraudulently push excess accounts onto people. Fired. Worse, Wells Fargo put her onto a permanent blacklist that others in the industry pay attention to, so she can't get a job anywhere else.

Imagine making $35k per year in San Francisco - insanely low for the region - and dipping into your own pockets to help fix a situation your own company created. Then, as thanks, being fired by that company for not continuing the practice, and being blacklisted in your field of work. I'm desperately hoping Ashley sues Wells Fargo for defamation, but I fear the likelihood of that is low, even though it's not the first time this has happened at Wells Fargo[2] ...

At best, upper management were willfully negligent of the impact that their insane sales goals had on the ethics of the company. At worst, upper management were actively trading any ethical notions they could get hold of for money, ripping apart the lives of employees and customers on the way.

Go Elizabeth Warren! Without her prodding, I am sure this would never have happened. Stumpf was happy enough firing the 5k employees while taking no blame himself for what was essentially his own push.

"As chief administrative officer from 2010 to 2011, however, Mr. Sloans role included overseeing Wells Fargos human resources and reputation management. He then became finance chief for three years. And one of his direct reports when he was promoted to chief operating officer last year was Carrie Tolstedt, who ran the offending community-banking division until earlier this year.

That makes Mr. Sloan a member of the inner circle that would have known about the wrongdoing from its early days and tried to deal with it. This group hardly covered itself in glory: It was still handing out pink slips in 2016, five years after the first bankers were shown the door. Mr. Stumpf and Ms. Tolstedt have already ceded compensation for the mess. Investigations by the board and regulators may yet implicate Mr. Sloan and others."

Elizabeth Warren deserves a lot of credit for putting the heat on Stumpf. Coincidentally, one of the leaked Democratic e-mails from this week features banking lobbyists complaining to Democratic party officials about Warren:

The apparent coziness between the party and the banks is a whole other can of worms, but that Warren doesn't hesitate to go against the party grain and attack folks like Stumpf is a nice thing to see. Good on her.

My understanding is that they created a system of unrealistic expectations around accounts per customer, managers pushed this on people from the top down, and employees who didn't comply got fired.

Then, in the case where employees started reporting that they were forced to do the activity to keep their jobs, nothing was done. This seems more like a lever that can be pulled here:

Prevent banks from having ineffective whistleblowing processes by mandating use of a FINRA whistleblowing program. The person in the top comment who was fired would have her future employability protected by FINRA, and Wells Fargo and other banks would take whistleblowing far more seriously. I admit it sounds very simplistic, but is there something crucial I'm missing here? I can't see how one would file charges against the top executives for "incompetently running an internal ethics line" or "setting too-difficult sales goals, causing an unintended fraudulent effect".

Whatever. Will he get a nine-figure golden parachute like the other senior exec who just left? If not, this does not impress me. Will any of his earnings and bonuses be clawed back? If not, this does not impress me. What about the other execs responsible?

TL;DR: The bank probably didn't benefit on net from the fraud, even before the fines. This is more a case of management setting unreasonable sales goals and creating a terrible work environment than a conspiracy to commit fraud against bank customers.

Bravo to the directors for waking up. Any time there's a long-serving CEO of a big public company, it's as if the directors have been chloroformed. They generally show no ability to question the boss, let alone hold him/her accountable for anything.

Most of the other boards that should be waking up ... probably won't. But at least they've got a reminder that it's possible.

I watched a lot of his testimony before the Financial Services Committee on CSPAN. So I am unsurprised. He took a fairly savage beating.

For some reason, the fact that Wells Fargo is a really big enterprise cut no ice with the various Congresscritters.

FWIW, I am not a "break up the banks" guy[1], mainly because of things written and said in podcasts by Charles Calomiris, author of "Fragile By Design" along with Stephen H. Haber. It is all more nuanced and complicated than that, and the small-bank lobby in the US really is a thing, and one that has caused problems.

[1] Canada has a highly concentrated banking system, something close to a single national bank, and has had no financial crises to speak of.

Honestly I still don't understand why this organization continues to exist. Banks are chartered by the government for the purpose of safeguarding deposits. This bank was engaged in fraud on a huge scale. Their charter should be terminated.

I couldn't read the article because of the paywall. Is there any news on Carrie Tolstedt, the woman who oversaw the unit that created all the fraudulent accounts and who gets to retire in July at age 56 with a $124 million pay package? That's in addition to the $9 million she took home last year.

He's still not actually being punished. He lost his job. Big deal; he's got a golden parachute. He's not going to go through any of the stress that any of the people he fired for this went through when they lost their jobs. He's not getting fined, he's not going to jail. The worst thing that happened to him is he got a tongue lashing from Warren.

Meta: Looks like WSJ is smart enough now to detect the `web` links. I went directly to the site, hit the paywall, backed out. Then went via the `web` link, same paywall. Went to the `web` link in incognito and was able to read the (fairly anemic) article.

Just curious, what is the benefit of staying with the Zenefits name and brand?

It's fairly toxic, as it is associated with debauchery and fraud. Wouldn't you want to raze the brand to the ground and start with something else, even if you're keeping the same tech stack and sales contracts?

NBC replayed a John Kerry cybersecurity sound bite and the Zenefits logo was in the background. They were a sponsor of the Virtuous Circle Conference. Not sure if Zenefits should have been asked to sponsor this event.

"A virtuous circle is often described as a self-reinforcing system that creates positive benefits throughout the economy."

This may seem somewhat unorthodox, but what I would strongly recommend is to start not by reading books on physics or math, but by reading some books on the history of physics (and math) first. This will give you some intangible basic knowledge, or a sense, of what scientific research is all about, so that many things that might otherwise puzzle you when you come to learn the "hard science" won't. One recommendation I can make is Rhodes' The Making of the Atomic Bomb.

Physics is made much harder than it needs to be by the fact that physics pedagogy is generally terrible. Physics texts start by just throwing equations at you, telling you "This is how it is" with no background or foundation about how we know that this is the way it is, or what it means that this is the way it is. There are some very good popularizations out there (like David Mermin's "Boojums all the way through") but very little that bridges the gap between these and "real" physics books. One of the things on my to-do list is to write a book to try to fill this void, at least for quantum mechanics.

> The Theoretical Minimum is a series of Stanford Continuing Studies courses taught by world renowned physicist Leonard Susskind. These courses collectively teach everything required to gain a basic understanding of each area of modern physics including all of the fundamental mathematics.

I've been thinking about reading The Feynman Lectures on Physics recently, but I always thought they were essentially textbooks; I was surprised to see them described as 'popular' here. I remember reading something about their origin: some universities tried adopting them, with the result that students found them too difficult (and many professors considered the material a sort of fresh take on classical subjects).

That's a great list. The only thing missing would be a linear algebra course. The OP mentions it in passing, but a good understanding of LA goes a long way. I did my undergrad in engineering, and when I switched to physics everything was over my head, but my knowledge of LA still managed to keep me afloat. Also, matrix quantum mechanics is essentially straight-up linear algebra (vectors, unitaries, projections, etc.)

That is, I've found it fascinating ever since high school, but once you need calculus to understand some of the more advanced stuff, I feel I get lost in the math (which, admittedly, I suck at) and lose the intuition for what's really going on. Then it just becomes a giant math problem that prevents me from seeing the bigger picture.

It's just this problem I've had that I always sweat the small things and sometimes miss the bigger picture or the main concept when I get frustrated that I can't understand the details.

I can't recommend 'Prime Obsession' by John Derbyshire enough. 'Gravity' by Hartle is invaluable and quite accessible. If you have a strong background in calculus, you can also check out 'Gravitation' by Misner, Thorne, and Wheeler.

Classical mechanics is missing from the grad school section. One of the primary books used is Goldstein's.

I also recommend Classical Dynamics of Particles and Systems by Marion and Thornton.

As was mentioned in another post, Linear Algebra is a must, and I think David Lay's book is a great one to start with.

As the author mentions, to learn physics you MUST DO PROBLEMS.

On another note, I can't seem to find anyone who has mnemonic techniques for learning equations. So if anyone comes across a good method, I'd like to hear it. And I'm not just talking about something like "low d high minus high d low, square the bottom and away we go", but about more complex equations, like memorizing Einstein's field equations. A method that could potentially work for any arbitrary equation.

Learning physics requires more than simply reading text books. A significant portion of actually understanding the concepts laid out in the book is performing demonstrations and experiments in the lab. In college, we had a 3 hour lab each week to go with 3 1-hour classes and each was critical to learning. I certainly admire anyone who wants to learn physics on their own, especially without already having a strong mathematical education, but to really grasp the meaning of the words in a book requires practical exposure in a lab.

For GR, I really liked Bernard Schutz's "A First Course in General Relativity" -- I read it from cover to cover.

Extremely lucid explanations of some very complex topics, and reading it for the first time blew my mind.

This book "teaches" you well (compared to other books where I feel like I really am putting in a ton of mental effort just to learn what the book is trying to say, much like reading mathematics articles on Wikipedia), and it still manages to move fast.

> If you work through all of the textbooks in the Undergraduate Physics list of this post, and master each of the topics, you'll have gained the knowledge equivalent of a Bachelor's Degree in Physics (and will be able to score well on the Physics GRE).

I am not so confident: the physics GRE is a notoriously difficult test, and a significant barrier to acceptance at any Ph.D. program.

Smartphone maps are so superior because they are always up to date, provide real-time traffic data, can suggest alternative routes, and can provide other data about places to eat, etc.

While a lot of car GPS units have some of this, they are nowhere near what Google Maps or Apple Maps (or a targeted app like Waze) offer. I bought a new car last year and intentionally didn't get the GPS, even though my car has a 7-inch touchscreen. I would never use those crappy maps when I have my phone with me. If the car supported Carplay, I would 100% use maps through that.

This is why Carplay and Android Auto are the future. Their apps, APIs and data are so superior to whatever car companies can come up with.

This is why Carplay/Android Auto are so key. They're safer than using your actual phone, they offer legitimate maps apps (Google Maps/Bing Maps/Apple Maps/etc), and are somewhat future proof.

Too bad the MirrorLink consortium dropped the ball so epically. MirrorLink arguably does the same thing as Carplay/Android Auto and has been deployed to millions of vehicles, but nobody uses MirrorLink 1.1. Why? Because getting your app certified takes tens of thousands of dollars, months, and tons of paperwork.

Carplay/Android Auto literally exist because the MirrorLink group created so many rules, regulations, and nonsense in the name of safety that MirrorLink 1.1 has like twenty apps total after two years(!). So if your vehicle has MirrorLink on the feature list, just laugh and forget it exists; you won't be using it.

PS - MirrorLink 1.0 allowed two-way screen sharing, which was legitimately useful. MirrorLink 1.1 is a very different beast; most newer cars and phones only have MirrorLink 1.1 (no 1.0 at all). 1.1 defines things like how big buttons have to be, what kind of animations can play, how many button presses it takes to reach each task, etc. Then everything has to be certified by an independent auditor.

PPS - The most depressing part: MirrorLink could certify Carplay/Android Auto themselves and instantly add both to millions of existing vehicles on the road. But they never will, simply because they're effectively in competition with both.

Another reason not mentioned by the article is map updates. Lexus charges $169+ to update the map data and requires a dealer appointment. A smartphone with google/Apple maps is always more up-to-date.

If you're buying a new car today, the reason to get navigation is for the LCD screen and not for the GPS. The LCD is used to see the rear view camera image for parking. Also, playing music shows the song titles.

Semi-unrelated question: am I the only one who wishes navigation apps allowed more precise control over route complexity?

Just a slider between "minimum number of turns" and "most efficient" would be nice. Optimizing a route for complexity vs. efficiency is an important consideration when planning it, and a way to do that in a navigation app would be really nice (a toy sketch of the idea is below). I appreciate the intention, but taking four extra turns and a one-way street or two to save one minute, when going somewhere that's one turn off a main road, is rarely a good idea. If I'm driving rural state highways for three hours, I'd much prefer to go 150mi on two roads than 120mi on ten roads.
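Something like a single weight over invented units, purely illustrative:

    // Score candidate routes with one slider value:
    // alpha = 1 -> pure travel time, alpha = 0 -> fewest turns.
    function routeCost(route, alpha) {
      var TURN_PENALTY_MIN = 1.5; // invented: one turn "costs" 1.5 minutes
      return alpha * route.minutes +
             (1 - alpha) * route.turns * TURN_PENALTY_MIN;
    }
    // Pick the cheapest candidate at the user's slider setting.
    function bestRoute(candidates, alpha) {
      return candidates.reduce(function (a, b) {
        return routeCost(a, alpha) <= routeCost(b, alpha) ? a : b;
      });
    }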

It never made sense to me why you'd want to have a built-in GPS these days when modular devices are available that are A. generally better-designed and B. replaceable. People buying used cars in the coming years will be stuck with these big-screened dinosaurs in their dash that are essentially wasted space.

As a counterpoint to the comments here bringing up the point that the maps on one's phone are always up to date, I'll bring up the point of offline access.

Yes, often the maps are bad, out of date, and require you to go to the dealer and pay to get an update. However, once you're out of the city and cellular data ceases to be a thing, Google Maps and similar applications built on the assumption of an always-available Internet connection become kind of useless. Yes, you can add maps for the areas you're planning to visit to your offline areas, but that requires planning ahead. And if you end up somewhere you didn't plan to be, well, good luck.

The UI of these in-car navigation systems is bad, but at least you have _a map_, instead of a featureless void with a blue You Are Here in the middle. Personally, I like a map book for such situations. Sure, it's ancient and obsolete technology and quickly goes out of date, but the UI is quick and easy to learn, and it doesn't require power.

My new Tacoma has a hybrid system called Scout GPS. The idea is that you run the app on your phone, and it does all the heavy lifting WRT GPS, computing routes, etc., and the dashboard display is basically a moderately dumb UI that displays maps on the screen.

In theory, it's a best of both worlds situation; the UI doesn't really need to change very often; what is most important is the maps/POI information, which can be downloaded either in-app or via an update. Practically, the UI is just irritating enough that I end up using Google Maps anyway, and just letting the voice directions get me where I'm going.

Why not buy one of the cheap aftermarket Chinese Android head units? When I was looking to buy a car I specifically looked for double DIN compatibility so that I can later easily install my own head unit. I would not want to support a car manufacturer who is abandoning the DIN standard for aftermarket head units. I want this central piece in the car to be upgradable. A car without double DIN is like a computer case without any slot to put an HDD.

There are very nice units with a native Android experience. They usually have either a 7" or 10" screen. There are also discussions of the various Chinese units on the xda forums, in case anyone here is interested...

Native GPS systems are painfully slow, in my experience. I have a 2014 Subaru Impreza. Its GPS usually tracks a full block behind where I actually am. If I'm not paying attention and make a turn based on the voice guidance alone, it's often a wrong turn. Re-routing takes a long time as well; too long to be useful. Additionally, I have to pay for annual map updates if I want them. To someone's point earlier, all interaction with the map is disabled while the car is in motion. It would be nice if it were unlocked so a passenger could use it.

So I use Google Maps on my phone. It's significantly faster, shows more meta-data relative to my route, e.g., delays and alternate route suggestions, and if it has to re-route it's usually immediate.

Even though my car stereo has Pandora on it, I usually just use Bluetooth playback with Pandora or Skype from my phone, as the Google Maps UX is much better and the directions work while listening.

Honestly, the car UX for entertainment etc. in general feels like something that should have been state of the art half a decade ago... it's too slow, unresponsive, and irritating imho, despite positive reviews.

For reference, it's the fullscreen Uconnect interface on a 2016 Dodge Challenger. Also, the fact that the antenna is 3G instead of LTE makes the mobile hotspot option worthless and not even a consideration.

I have looked at the comments here, and everyone is talking about the superiority of their phones and how Carplay and Android Auto are so much better.

And this leads me to having a Steve Jobs moment: I wish the car dash computers would work as simply as smart watches where they are just dumb remote controls for phones.

I don't have Android Auto or Carplay. But I have used a Moto 360, and I find that is all I really ever need when driving. The voice recognition is awesome. The interface is simple. And like all simple things, it leads the mind to wish for more.

It seems like the car manufacturers tried to swallow too much at once and sucked at all of it.

That was my biggest problem when I recently bought a car: almost all of the ones with the options I wanted (out of dealer stock) also included the $800 manufacturer's GPS, which is completely useless to me since I could get by perfectly fine with Android Auto. So I ended up paying for a feature that I would not only not use, but that was of such inferior quality it can't locate either my home address or my work address (for home, it routes me to a house several miles away, and for work it sends me to the movie theater 3 blocks east).

After reading the article, I think the non-clickbait title should be "Most drivers who own cars with built-in GPS systems _sometimes_ use phones for directions".

I am pretty happy with my built-in system: it is fast, always on, integrates well with the car, provides dual screens (in-dash plus a bigger console screen), free quarterly OTA map updates, Google-backed POI search, and live traffic. Most importantly, I don't need to fish my phone out of my pocket, mount it somewhere, and connect a cable for longer trips.

Yet I also sometimes (1%?) use my phone for directions. Most of the time it's when I need to look up something nearby and I'm not in the car.

I cited my experience with the new Rav4 satnav in a recent blog post [1] on the difference between "requirements" and "needs". I tweeted it with "If you've lost something in your modern Toyota, it's probably hiding behind a settings menu". The current version isn't terrible; the version the car was delivered with truly was. Inexcusable. Although there are benefits to integration, unless the delivery model changes to a more phone-like one, I can't see myself buying a built-in satnav again.

The Audi navigation works well sometimes, and I really like the integration into the car (it uses the most up-to-date maps, and they give free map updates for 3 years).

However, the routes are not the most efficient, and although it supposedly uses Sirius for traffic, the data isn't very accurate. Google Maps often gives me a much more efficient route that can be twice as fast given current traffic conditions, but is more "unusual".

Audi's GPS likes to stick to main roads that become heavily clogged. And even though I have an update that came out within the last 2 months, it still doesn't include a major construction project that was completed a year ago, and always tries to route you around it.

So basically, I use Google Maps for navigation within the city or areas I'm familiar with, and Audi's navigation for long trips (i.e. intercity highway trips) where the route will mostly be the same on both systems.

That being said, the GPS in my car ALWAYS gets an accurate lock right away. My phone (Nexus 6P) has an awful GPS that doesn't work in hilly areas or around tall buildings and constantly loses the signal. It barely works inside the car, and even then the accuracy isn't great. I guess the all-metal phone really kills the GPS signal, since the Nexus 6 had an amazingly good GPS with high accuracy in the same conditions.

It's really simple: how hard is my phone's UX vs. the car's system? I've _never_ seen a built-in car version with a UX that wasn't ridiculously complex and frustrating to use.

There are two concrete, real examples of a routing UX that anyone can just try out: grab an Android phone or an iPhone and directly copy the experience. Just sit down and try the most basic task of a nav system, entering an address and starting navigation, on your car system and on either phone.

Oh, wait, differing goals. I, as a user, want a usable system. The makers of these awful nav systems have an entirely different goal: they are trying to sell an "option" upgrade to car makers, who then include it in "option packages" with the car... usability is way, way down the list after bells/whistles/marketing/subscription sales.

As much as the size, placement, and car integration of my vehicle's (2016 CX-5) dash GPS is ideal, the fact that it cannot display Waze renders it mostly useless for my everyday commuting. Until I can do so via Carplay or Android Auto, I'll be using gaudy cell phone mounts.

If Waze came to Carplay and Android Auto, I imagine a lot of people would use it. The only navigation I currently use is Waze, and I use it every single time I'm driving, even if I don't need directions. The ETA, routing, and alerts are useful even if you know where you're going.

I don't. My 2013 Prius has an excellent GPS and I'm not too concerned that the maps are a bit out of date. I still get accurate traffic data from XM.

There are advantages to the built-in navigation: thanks to inertial navigation (compass and tire rotation), it works even when there's no GPS signal. Mine has a head-up display so I can see navigation cues without taking my eyes off the road. The voice synth is much better than Apple Maps or Google Maps.

Reading the comments I appear to be the exception. I lease a 2015 BMW 4-series which I added "Professional Media" to and the nav works really well. I've driven plenty of other cars where the nav is useless, but not all are bad.

Reasons to use it over my phone:

* 11" screen shows me a high quality map overlayed with current traffic, with 1/3 split for lane guidance when needed

* Directions also shown directly on the dash - no need to look far away

* Spoken instructions dim the music volume

* Pretty accurate traffic information with automatic OTA updates

Downsides:

* Voice control is shitty; saying "set destination to Leamington Spa" will change to a random radio frequency

* Slightly awkward to send routes from PC to car, has to be via Remote Control App

When I bought my latest car, the navigation package included a few safety features I wanted, so I had to get it to get a somewhat safer car. My car, a Mazda, comes with 3 years of free map updates. I have yet to update it, because I'm lazy. It also comes with an optional, paid-for traffic service. Why would I waste my money on that when I can get free (I know, I know) Waze, Apple Maps, or Google Maps, which will get me where I need to go while avoiding traffic? Despite it being a 2016 model year vehicle, the GPS/entertainment center feels like it's from 2010 or so. So yes, I use my phone for navigation most of the time.

You can't buy car features a la carte like the "build your car" tool on every automaker's site suggests you can. You're forced to buy the "tech package", which includes the GPS, because you want something as simple as Bluetooth.

I haven't tried built-in GPS, but I prefer using my phone over my standalone GPS because it's much faster. My standalone unit takes forever to boot, has too-long delays when navigating between screens, and overall is just frustratingly sluggish to use.

I can't use my car's GPS because the car is from Japan: it's Japanese-language only and only has maps of Japan. You can't just download a map of any other country, and you can't even change the language.

So, yes, I'd rather use my phone's GPS: the maps are always up to date, I can choose between Google Maps and Apple Maps (or any other app), and I can select any language.

My car's map was so out of date when I bought it that it instantly became useless to me. My wife, sitting in the passenger seat, cannot select destinations while we are driving, so it is far easier to just use a phone, with the added benefits of up-to-date maps and traffic.

Really the thing you want here is for your smartphone to have access to the car's GPS receiver, which is huge and sensitive and can use all the power it wants. The phone can just treat it as a peripheral.
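Aftermarket Bluetooth GPS receivers already work this way: they stream plain-text NMEA 0183 sentences that the phone parses. A rough sketch of the parsing involved (simplified; real code should validate the checksum):

    // Parse a $GPGGA fix sentence (fields: time, lat, N/S, lon, E/W, fix quality, ...).
    function parseGGA(sentence) {
      var f = sentence.split(',');
      if (f[0] !== '$GPGGA') return null;
      function toDegrees(dm, hemi) {          // '4807.038','N' -> 48.1173
        var dot = dm.indexOf('.');
        var deg = parseFloat(dm.slice(0, dot - 2));
        var min = parseFloat(dm.slice(dot - 2));
        var v = deg + min / 60;
        return (hemi === 'S' || hemi === 'W') ? -v : v;
      }
      return { lat: toDegrees(f[2], f[3]), lon: toDegrees(f[4], f[5]), fix: +f[6] };
    }
    // parseGGA('$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47')
    // -> { lat: 48.1173, lon: 11.516..., fix: 1 }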

Because built-in GPS is terrible. I recently bought a 2016 Mercedes-AMG vehicle that has satnav as standard (on non-AMG models it's a $1000 (!!!!!!) option). The satnav is made by Garmin, and it's just awful. Truly horrible in terms of speed and usability, and it only gets updates once a year (you get a new SD card when you go in for the annual service).

In the meantime, my TomTom 5000 satnav is still unbeaten: free lifetime upgrades, a clear, fast interface, a free and constant internet connection in every country of the world, with accurate traffic and speed camera updates. And it only cost me ~$250 new.

I just don't understand why anyone would get a built-in satnav over a dedicated device.

The quality of phone-based GPS directions has improved drastically in the last 3 years. About 5 years ago I had a cutting-edge GPS in my car that did better than my phone. Now there's no comparison, and I wish one could simply mirror one's iPhone/Android display onto the car's touchscreen without the pageantry of needing Android Auto/Carplay.

I've always hated built-in GPS for two reasons. First and most important: the damn maps are always viewed from directly above the car, so the map is 2D and north is always at the top. There is a lot of effin' mental work to figure out which damn way your car is pointing and whether you need to turn left or right, because you may be driving "down" the map and everything is reversed. On my phone or on my Garmin, I get a perspective view of my car and its position on the earth. There is no confusion about which way I actually need to turn.

Secondly, the vast majority of built-in GPS units sit on a console a good 6 to 12 inches below the windshield, meaning I have to take my eyes far off the road to look at anything. My Garmin mounts to my windshield or dash. It's a quick glance and then eyes back on the road.
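On the first point, heading-up display is conceptually just a rotation of the map by the vehicle's heading. A toy canvas sketch:

    // Rotate the map around the car's screen position so "forward" is up.
    function drawHeadingUp(ctx, drawNorthUpMap, headingRad, carX, carY) {
      ctx.save();
      ctx.translate(carX, carY);    // pivot on the car
      ctx.rotate(-headingRad);      // undo the heading: the car now points up
      ctx.translate(-carX, -carY);
      drawNorthUpMap(ctx);          // whatever renders the north-up map
      ctx.restore();
    }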

A built-in GPS was pretty much the only requirement I had for the last few cars I got (both VWs), and tbh I'm not that disappointed. The only thing really lacking is decent search; once you have an address to input, it's mostly OK, but it's true that the temptation to just say "fuck it, I've found it on Maps, let's just use this" is there. If there were an easy way to copy or "stream" the address to the car via Bluetooth, I think most people would use it.

It looks like the field is in flux anyway, VW keeps dramatically changing the UI in every new car.

This article was written before UTF-8 became the de facto standard. According to Wikipedia, UTF-8 encodes each of the 1,112,064 valid code points; much more than Goundry's (the author's) 170,000. Goundry's only complaint against UTF-8 is that, at the time, it was one of three possible encoding formats that might work. Since it has now been widely embraced, the complaint is no longer valid.

In short, Unicode will work just fine on the internet in 2016 as far as encoding all the characters goes. Problems having to do with how ordinal numbers are used, right-to-left languages, upper-case/lower-case anomalies, different glyphs being used for the same letter depending on the letter's position in the word (and many other realities of language and script differences) all need to be at the forefront of a developer's mind when building a multilingual site.

UTF-16, and non-BMP planes, were devised in 1996. The author seems to have been 5 years late to the party.

> The current permutation of Unicode gives a theoretical maximum of approximately 65,000 characters

No, UTF-16 can address 1,114,112 code points (17 planes of 65,536 each), of which 1,112,064 are valid characters; far more than 65,000.

> Clearly, 32 bits (4 octets) would have been more than adequate if they were a contiguous block. Indeed, "18 bits wide" (262,144 variations) would be enough to address the world's characters if a contiguous block.

UTF-16 provides 21 bits, 3 more than the author wants.

Except they're not in a contiguous block:

> But two separate 16 bit blocks do not solve the problem at all.

The author doesn't explain why having multiple blocks is a problem. This works just fine, and has enabled Unicode to accommodate the hundreds of thousands of extra characters the author said it ought to.
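For the curious, here's how the two 16-bit blocks combine in practice, using JavaScript (whose strings are UTF-16):

    var s = '\uD83D\uDE00';               // high + low surrogate for U+1F600
    // Decoding by hand: ((hi - 0xD800) << 10) + (lo - 0xDC00) + 0x10000
    var hi = s.charCodeAt(0), lo = s.charCodeAt(1);
    var cp = ((hi - 0xD800) << 10) + (lo - 0xDC00) + 0x10000;
    console.log(cp.toString(16));         // '1f600'
    console.log(s.codePointAt(0) === cp); // true (ES2015 does this for you)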

Though maybe there's a hint in this later comment:

> One can easily formulate new standards using 4 octet blocks (ad infinitum) but piggybacking them on top of Unicode 3.1 simply exacerbates the complexity of font mapping, as Unicode 3.1 has increased the complexity of UCS-2.

They would have preferred it if backwards compatibility had been broken and everyone had switched to a new format like UTF-32/UCS-4, but not called Unicode, I guess?

Man, UCS-2 is the pits. I still remember fighting with 'narrow builds' of Python back in the day.

Any critique of Unicode that doesn't assume UTF-8 (which allows for more than 1 million code points) is a bit suspect in my opinion. The biggest point against UTF-8 might be that it takes more space than 'local' encodings for Asian languages.

This is probably the most important research direction in modern neural network research.

Neural networks are great at pattern recognition. Things like LSTMs allow pattern recognition through time, so they can develop "memories". This is useful in things like understanding text (the meaning of one word often depends on the previous few words).

But how can a neural network know "facts"?

Humans have things like books, or the ability to ask others for things they don't know. How would we build something analogous to that for neural network-powered "AIs"?

There's been a strand of research mostly coming out of Jason Weston's Memory Networks work[1]. This extends that by using a new form of memory, and shows how it can perform some pretty difficult tasks, including graph tasks like London Underground traversal.

One good quote showing how well it works:

> In this case, the best LSTM network we found in an extensive hyper-parameter search failed to complete the first level of its training curriculum of even the easiest task (traversal), reaching an average of only 37% accuracy after almost two million training examples; DNCs reached an average of 98.8% accuracy on the final lesson of the same curriculum after around one million training examples.

Very exciting extension of Neural Turing Machines. As a side note: Gated Graph Sequence Neural Networks (https://arxiv.org/abs/1511.05493) perform similarly or better on the bAbI tasks mentioned in the paper. The comparison to existing graph neural network models apparently didn't make it into the paper (sadly).

I'm probably totally off base here (neural networks/AI is not my wheelhouse), but is having "memory" in neural networks a new thing? Isn't this just a different application of a more typical 'feedback loop' in the network?

I'm currently evaluating LibreSSL for use in data protection software I licensed to a large company.

The optional libtls API bundled with LibreSSL is a really simple wrapper API that is secure by default. And it was a breeze to build on Windows because they use CMake (you just need to download a release bundle rather than building from git, to avoid problems). A couple of the optional libtls functions don't work on Windows (tls_load_file), but 100% of the OpenSSL 1.0.1+ API functions I've tried so far worked fine.

For me, the biggest downside is LibreSSL doesn't support X25519 yet, while BoringSSL and OpenSSL both support it. And BoringSSL is starting to get easier to use with other software like nginx without messy patches.

Hopefully, X25519 will be added as a beta feature during LibreSSL 2.5.x and released as stable in 2.6.

I'm glad that Void Linux is no longer the only distro in town that's switched to LibreSSL. And with Docker defaulting to Alpine, more OpenSSL/LibreSSL compatibility fixes will trickle into upstream projects. This is good news.

EDIT: Next, I predict a Linux distro will, after having switched to musl, also support llvm/compiler-rt/libunwind/libc++ as the base toolchain instead of gcc/libgcc/libstdc++.

I recently built LibreSSL to replace OpenSSL on my laptop, which runs Arch Linux. After installing, pretty much everything works seamlessly so far. I rebuilt Python because apparently the ssl module looks for RAND_egd (or something of that vein that LibreSSL has removed, and I didn't compile it with a shim). Other than that, dig is broken on my system ("ENGINE_by_id failed"), although I haven't bothered to fix it since drill works fine.

It's nice to see LibreSSL being picked up by Linux distributions. I wish other major distros did this (I'm looking at you, Debian). IIRC, Alpine is often used to build Docker images. If that's still the case, I'd say it's good news.

It's worth remembering that OpenSSL has faithfully served the community for many years. Most of those years with almost no financial support. Few projects would survive the scrutiny they have undergone. These guys deserve some credit.

How has LibreSSL stood up lately to the relatively frequent CVEs in OpenSSL over the past few months? I know the initial months were a frenzy of removing garbage and whole classes of problems (#yadf) that preempted a few CVEs, but I haven't been paying attention to the commit logs to know whether it was also susceptible to them.

I tried LibreSSL on OS X (it's on Homebrew); the binary is around half the size of the equivalent OpenSSL release. Kudos to the LibreSSL team for stripping out so much junk (OSes that people don't use and ciphers that people shouldn't be using) and producing a more auditable, sensible codebase.

I hope LibreSSL gets the traction it needs in the coming months and years. There are great wins in using it, and it's worth moving the FOSS stack over to LibreSSL instead of OpenSSL to reduce the number of security bugs we're facing today.

As an old-time OpenVMS user (on VAX DECstations first and on AlphaServers later), I'm looking forward to this.

I doubt it will ever come even remotely close to a "mainstream" OS, but Alpha to x86 is a much better migration path than Alpha to Itanium.

And yes, there are still many OpenVMS installs out there in the wild, from airport logistics to assembly lines, so this may make sense from a financial standpoint (versus a complete software rewrite for Linux/AIX/whatever) - provided they can market it well enough to the old users.

I work with an old DEC guy who co-wrote a book about the BLISS compiler back in the day. He thought it made a much better "portable assembly language" than C.

He also likes to talk about the binary compatibility on VMS; e.g., VAX binaries that run unchanged on Itanium. Unfortunately, that commitment to compatibility has wavered a bit since HP offshored VMS maintenance. We've had to work around some breaking changes recently.

If I'm understanding these slides correctly, AMD64 VMS would require recompiling VAX, Alpha, and Itanium applications from source. I kind of see a chicken-and-egg problem: application vendors won't port without demand, and customers won't adopt without applications.

Long live DEC OpenVMS! It's really rare to see an article about OpenVMS on Hacker News, but it has happened before. Long live DCL and ASTs. I started work in 2000, but was still able to play around with VAX and Alpha clusters - sometimes, if I recall correctly, with a mix of VAX and Alpha nodes in the same cluster. OpenVMS was such a stable operating system. By 2000, most of the good DEC engineers had already left for companies like Microsoft (Dave Cutler, etc.), but that legacy code was still quite amazing. Long live ZKO and DEC - I learned a lot from that job. It's great to see this coming, though I'm not sure it's still relevant these days, with all my development on Linux. Could you imagine running this port on AWS, with some customer migrating all their legacy infrastructure to the cloud? That would be such a niche market.

Why not translate the PDP-11 or VAX binaries to x86-64 and emulate the memory-mapped device interfaces? Seems like less work, and more likely to produce a cheap solution for legacy installations worrying about how to run an ancient critical application on old gear.

I do believe that MACRO in the slides refers to MACRO-11, a PDP assembler - or maybe MACRO-32, for the VAX. In either case, they have been compiling this "assembly" language since VMS first ran on Alpha (and now, I guess, Itanium).

A nice little dashboard. Surprisingly, the most valuable part of it for me is honestly just the split stdout/stderr, as the rest of the data is relatively easy to get at (except event loop lag, though there are packages like toobusy [https://www.npmjs.com/package/toobusy-js] to help with that).
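If "event loop lag" sounds abstract: schedule a timer for T seconds and measure how late it actually fires - the lateness is the lag. That's essentially what toobusy-js does on Node's loop; here is the same measurement sketched with Python's asyncio, purely for illustration:

    # Measure event loop lag: a timer that fires late means something
    # blocked the loop in the meantime.
    import asyncio
    import time

    async def lag_monitor(interval=0.1):
        while True:
            start = time.monotonic()
            await asyncio.sleep(interval)
            lag = time.monotonic() - start - interval
            if lag > 0.05:
                print(f"event loop lag: {lag * 1000:.0f} ms")

    async def main():
        monitor = asyncio.create_task(lag_monitor())
        await asyncio.sleep(0.2)  # let the monitor settle
        time.sleep(0.5)           # a blocking call stalls the loop...
        await asyncio.sleep(0.2)  # ...and shows up as ~500 ms of lag
        monitor.cancel()

    asyncio.run(main())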

It doesn't work in combination with nodemon or babel-watch (yet), but the code looks very clean & simple so I assume it'll be an easy update.

Is there anyone interested in improving the UX of the stock Node.js in-terminal debugger who just needs money to do so? Because the experience of "run this single test and pop into the debugger at line 14 to poke around" has several warts that I'm surprised haven't been fixed yet:

1) Having to call `cont` at the beginning rather than the debugger stopping at the actual place where the `debugger` line is placed.

2) Having to call `repl` in order to start printing things.

3) After calling `repl`, not being able to get back to the mode where you can go to the next line and jump into functions.

4) If jumping into an express.js route, inspecting the request sometimes just results in the message "no frames".

It's a known and deliberate shortcoming of many licenses (e.g. BSD) that they don't include patent language, because it makes everything unnecessarily complex. There was recently an article about why BSD and MIT are so popular, and it's because they're concise and understandable. There's a reason the WTFPL exists and some developers resort to it as a way to avoid legalese.

Facebook clearly was aware of this "shortcoming" and being a big player, they might have wanted to be nice and say "we won't sue you for patent infringement if it turns out we have a patent on something React does". Then the managers went "but what if they sue us? Patents are not only for offense but also our defense, we would weaken our defense." And so the clause of "except if you sue us first" came into being.

And now this fuss about the patent part making it not an open source license? Oh come on.

I really don't like Facebook as a company, but this bickering is silly.

Version 2 of the Apple Public Source License includes the following termination clause:

12.1 Termination. This License and the rights granted hereunder will terminate:

(c) automatically without notice from Apple if You, at any time during the term of this License, commence an action for patent infringement against Apple; provided that Apple did not first commence an action for patent infringement against You in that instance.

Like the React patent grant, this applies to any patent suit, not just ones that allege that the covered software infringes. The Open Source Initiative considers APSLv2 an Open Source license, and the Free Software Foundation considers it a Free Software license. Note that this clause terminates your copyright license, not merely your patent license - it's significantly stronger than the React rider.

So I think the claim that it's not open source is a bit strong, even though I find this sort of language pretty repugnant.

It comes down to two questions (quoted from the linked question) - note that those are questions, not assertions:

1)

> ... if we use any of Facebook's open source projects Facebook can violate *our patents* (of any kind) pretty much with impunity: If we try to sue them we lose the right to patents covering their open source projects(?)

2)

> I have read opinions that other open source projects that don't have such a clause, for example those from Microsoft or Google, nevertheless have the exact same problem, only that it isn't explicitly stated. Is that true? Is my situation not any better when I only use open source projects without such a clause?

I think that is a good point. The many opinions I see are almost all from people who don't have their own patents to think about, but what happens if you are a company and you do? Would you basically allow Facebook to use any of your patents, because for all practical purposes you can't defend them if you rely on their open source projects?

So, assuming you actually have a patent, and Facebook actually decides to infringe it, the worst-case scenario is that you lose your license to use React. There are principles of fairness and equity in the law that would allow you to stop using React over a reasonable amount of time. So write your frontend in Elm. It probably needed a rewrite anyway.

I'm not sure why they didn't just use the MS-PL; it sounds like the same thing. I don't understand why they'd use BSD if the MS-PL achieves what they want and is backed by Microsoft (surely it would be in Microsoft's best interest to defend its own license).

Do people really think Facebook developed and released React for the sole, or even primary, purpose of gaining patent rights? It's preposterous to think that so many top engineers would be working toward such a goal.

It seems obvious that Facebook just has some overly cautious lawyers. I highly doubt that means Facebook is going to use your usage of React as an excuse to steal your patents.

I constantly find the need to read up on licensing, usually via scattered blog posts and online information, which never leaves me feeling that I fully understand the legal implications or the context.

Can anyone recommend a book covering software licenses in depth? (ideally not only US centric)

Facebook's license is even weaker than a BSD/MIT license with no PATENTS file attached at all, because in that case the patent grant can be considered implicit, depending on jurisdiction. By including an explicit PATENTS license in the repository, Facebook nullifies the possibility of such a defense.

One interesting thing to note is that the actual license makes no specific references to the patents rider, and in fact the patents grant rider is a separate file completely. Does that mean that I have to follow it, if it's not directly in the license? If we look at the license as a contract, shouldn't it be in the license directly, even if it's referencing something outside?

". But Ive never heard any lawyer postulate that that document does not grant a license to fully exploit the licensed software under all of the licensors intellectual property. Anyone who pushes that view is thinking too hard."

Nobody has pushed this view.

However, the author seems to miss that such rights are likely not sublicensable, because they are implied, and implied rights are pretty much never sublicensable.

That is, I may have gotten the rights. That does not mean I can give someone else the same rights.

Now, there are other possible principles, such as exhaustion, that may take care of this (it's a grey area).

But it's definitely not the case that implied patent rights are somehow going to be better than an explicit grant.

They are for people using software. They are not for people distributing software.

I worked with a development manager headhunted from Microsoft, who was quite worried about simple "Taint" from open source software (i.e. ideas gained from viewing open source code making their way into closed source software). I also worked with a company which wouldn't accept code contributions to their OS project; they would do clean room implementations to avoid the legal hassle of incorporating code which wasn't written for hire. So I can certainly see large companies being leery of utilizing software with licenses which don't include patent grants.

Perhaps it's less of an issue with the BSD style licenses, as explicitly called out in the article.

Frankly, many developers don't really care about licenses or their clauses; software veterans and corporations care more than anyone else. Even this article is kind of hard to grasp in one go - I had to read it a couple of times. Finding the nuances in an OSS license and thinking and acting on them is not easy for a lot of non-native English speakers. More posts like this are needed so that more people read about and understand these serious issues.

It turns out companies are now "bastardizing" license terms. I would love for the OSI to re-evaluate whether these licenses are truly open source. Open source is about freedom, and I'd argue these clauses abridge that freedom, since it's entirely possible for a company to have a legitimate need to sue Apple or Facebook over patents. If that unrelated lawsuit "strips" your legal right to use the software, that is NOT freedom.

If a person offers two licenses, and you only need to accept one of them to be licensed, then you should be all set.

If I sue Facebook and they countersue, then my defense is simply "I am licensed under BSD." The fact that they offered an additional license (they even call it "additional") does not mean I am required to accept it when the first license stands on its own.

This has been a major issue for me from the get-go. It goes against open source culture, but that's no surprise, because this is what Facebook loves to do (and has consistently proved it).

They pick something up-and-coming and recreate it, injecting their ideals while knocking the original. Then they release it to a sea of "pseudo developers" who latch onto it with a "well, it's good because it's Facebook" mentality and aggressively defend it, giving Facebook more leverage.

Then they rinse and repeat until they have replaced everything the community has created with their own equivalents, instead of contributing back to those projects like a true supporter of open source would.

Open source is much more than having code in a repo; it's a culture - one that Facebook is hell-bent on "changing".

Unrelated to the actual content of the page, but why does this site require 1.25 MB of JavaScript to load? It makes up nearly 70% of the entire page and accounts for almost 60% of the requests needed to render it. Do you really need that much JavaScript just to render a blog post? Why?

Those webfonts also took multiple seconds to retrieve, leaving the page essentially blank until visitors' browsers finally pulled them down. It paused long enough that I wondered whether HN had sent enough traffic to bring the site down.

Similar approach to the patchwork of various patent grants on Opus implementations; it's still open source, it just might not be free for 100% of the purposes you could think of.

I think Robert doesn't understand that open source refers to the source code being open to use, derivation, and study. The BSD license also includes a warranty disclaimer, which is exactly the same kind of protective language as the patent grant. The Facebook arrangement meets all of those requirements, with the one stipulation that you forfeit the license when you enter patent litigation against Facebook - whether a counterclaim over the granted patents or primary litigation over unrelated patents. I don't consider countersuing Facebook over patents that apply to React, while USING REACT, to be a serious, fundamental software freedom.

So this attorney's problem is that rather than ambiguously granting a license to the patent claims necessary to implement this software, they decided to explicitly grant such rights? I see no problems.

Other Open Source licenses have patent termination provisions. Apache 2.0 (which React used to use) says "If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.". There are other licenses as well that include more extensive patent termination provisions, such as the APSL.

The argument that "This is Not Open Source Software" feels unsupported and very sloppy.

> Thus, the licensee pays a price to use the library. It is not a price paid with money. [..] I could be missing something, but I have never seen any other software license use such a condition and also claim to be an open source license.

This just isn't thinking creatively. The GPL also requires a "price to be paid, but not with money" -- you give up your right to keep changes you make secret (if you distribute them). Yet no-one seriously argues that the GPL isn't an open source license.

If there is something about giving up the right to file patent lawsuits that is totally different to giving up the right to keep your changes secret, the article doesn't say what that difference is. Giving up the right to keep your changes secret is surely more stringent than giving up your right to file patent infringement lawsuits against one company. Why, then, should the latter be a dealbreaker for an open source license?

I've used and loved Bitbucket for years for private project hosting, but for the occasions when I need to put more devs on a particular project, I'm not sure why I would pay for it now rather than use GitLab for free. It might be different if it were five users per project for free, but five collaborators across all of my projects is just too limiting - especially when I can get everything Bitbucket has, and more, from GitLab.

At the same time, I understand Bitbucket needs to make money - hopefully they'll make enough to keep the competition going.

I was recently looking at Bitbucket Data Center to move our in-house git server to something with better clustering, high availability, and failover (rather than rolling that ourselves). I was a little disappointed to find that the HA features amount to storing your repos on an NFS server and detaching/attaching it to the primary node.

Congrats on the raise, but I'll be honest: it really rubs me the wrong way that they renamed the project from JAWS to "Serverless Framework". They should have picked a different name. The word "serverless", for better or worse, is what the community settled on; lots of companies and individuals use it in many different ways, and did so before this project existed. But they are trying to own it[1]. Not being a good community member, IMO.

Edit: An example of the kind of thing I am concerned will happen[2]. Use the word "serverless" in a project? Get a take down notice.

Are there other examples of an open-source community project taking on several million dollars in venture capital funding? It seems a bit odd to me.

I'm not sure I'm comfortable using a presumably open-source, free (beer and speech) tool knowing that the group behind it will have to find a way to monetize their users in order to justify the investment from VCs at some point down the road. Open-source developers should of course be able to be compensated for their work, and the project has to find a way to sustain itself (I work for a company whose main product is open-source so I know this better than most), but the venture capital model doesn't seem like a good fit with the interests of the community, in my opinion.

That said, Serverless is a great tool, and congrats on the 1.0. Thanks to the team for their hard work.

I recently started using Apex. It doesn't rely on CloudFormation and has (somewhat hacky) support for Golang. It's worth a look if you're getting more serious about Lambda development and are interested in other options. http://apex.run/

Neal Stephenson wrote a bit about this in Cryptonomicon: laying new undersea cable is both expensive and time-consuming, but the cost of cutting existing ones is fairly low. So if any sufficiently funded individual, corporation, or nation-state wanted to hold a gun to the world's head, cutting undersea data cables wouldn't be a bad way to do it.

The problem is, you can't make that kind of threat in a subtle way, so to consider something like it you would have to be some kind of international pariah with a warmongering streak and a history of 'lying in plain sight' about your own nefarious deeds.

Edit: Okay, we're in better shape today than we were in the '90s and cutting off Cyprus' internet wouldn't cripple the world, but we still don't have THAT many cables running across the Atlantic and Pacific.

Russia is doing a lot of fearmongering in the run-up to the US elections. I stumbled across some agitprop on Twitter yesterday, where I read that Russia was recalling all students who were studying abroad. Out of curiosity (to say the least) I tracked down the source article, which was a) not available in English, and b) just Russians criticizing privileged Russians who sent their kids abroad to study. Looking around the rest of the agitprop stuff that I could find at a cursory glance, there is a ton of literal FUD going around right now, to the tune of "WWIII imminent."

There is no evidence that the Russians are "tapping" or "sabotaging" undersea cables; the Americans and Israelis would also tap those cables, and I think it's far more likely the Russians are doing counter-espionage against taps on Turkish, Syrian, and Lebanese cables, to win favour with those governments.

Ships do not "loiter" in international waters. "Loiter" implies the loiterer is violating some rights the accuser imagines himself to have. Russia may be simply legally installing its surveillance equipment on cables it found in international waters. There's no 'loitering' involved here.

I invite you to study the speeches of Anson Chan, lest you believe that the irritating political weasel-wording we are so used to these days is universal.

Nate Silver at 538 has talked about this whole dissecting-polls business, or "unskewing" them. All polls must make methodological choices, and all of those choices have advantages and disadvantages. Spending a lot of time dissecting those choices and passing judgment on them is not as productive as:

1. Looking at aggregates of lots of polls.

2. Looking at the variance that a poll captures from one iteration of it to another.

Or at least, so he claims. Obviously, he runs a poll aggregator, using a model that heavily weights the trendline of individual polls, so he has a dog in this fight. (A toy sketch of the aggregation idea in point 1 follows.)
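For illustration, point 1 can be as simple as a decay-weighted average across polls. The half-life weighting below is my own assumption for the sketch, not 538's actual model, and the numbers are invented:

    # Toy poll aggregator: average several polls, trusting recent
    # ones more via an exponential-decay weight.
    polls = [
        # (days_ago, candidate_share_percent) -- invented numbers
        (1, 47.0),
        (3, 45.5),
        (7, 48.2),
        (14, 44.0),
    ]

    def aggregate(polls, half_life_days=7.0):
        weighted_sum = 0.0
        total_weight = 0.0
        for days_ago, share in polls:
            weight = 0.5 ** (days_ago / half_life_days)
            weighted_sum += weight * share
            total_weight += weight
        return weighted_sum / total_weight

    print(f"aggregate estimate: {aggregate(polls):.1f}%")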

One thing I noted is that the NY Times says most polls have four categories of age and five categories of education - except these aren't really categories, they're ordinal variables.

Age and level of education are slightly co-variant (you don't get many 18-year-olds with a PhD). Because the age and education classifications are ordinal, you should use an ordinal smoothing [0] function to turn them into pseudo-continuous variables. Given the continuous, co-variant independent variables (along with the other categorical independent variables) and a categorical dependent variable, the best analysis is probably a quadratic discriminant analysis (QDA).
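A hedged sketch of that pipeline with scikit-learn, where plain integer ranks stand in for a real ordinal smoother and the survey data is entirely synthetic:

    # Encode ordinal brackets as numeric ranks and fit a QDA.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n = 500
    age = rng.integers(0, 4, n)                        # four age brackets
    edu = np.clip(age + rng.integers(-1, 3, n), 0, 4)  # five levels, co-varying with age
    X = np.column_stack([age, edu]).astype(float)
    # Synthetic vote choice: the categorical dependent variable.
    y = (0.4 * age + 0.3 * edu + rng.normal(0, 1, n) > 1.5).astype(int)

    qda = QuadraticDiscriminantAnalysis().fit(X, y)
    print("training accuracy:", qda.score(X, y))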

This is really fascinating. I get why the poll creators made these decisions, but the results of the weighting lead to a ridiculous result compared to other polls. Supposedly this poll was extremely accurate in 2012, so who knows?

I'm not sure he knows the role he's playing, and I'm curious how tracking polls like this try to account for the large media attention paid to the poll and its methodology. This guy is known to stats nerds; they've been tracking his moves and (rather mean-spiritedly) calling him "Carlton" for a while now.

TL;DR: 19-year-old black men are a very small demographic and yield a small sample. Apparently the LA Times poll sample includes an outlier who favors Trump, who then gets weighted disproportionately, producing the conclusion that Trump is favored by young black voters.
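A toy illustration of why one heavily weighted respondent can dominate a small cell (all numbers invented):

    # One respondent in a tiny demographic cell gets a huge weight,
    # and the cell's estimate collapses to that single opinion.
    respondents = [
        # (weight, supports_trump)
        (30.0, True),   # the heavily weighted outlier
        (1.0, False),
        (1.0, False),
        (1.0, False),
    ]

    support = sum(w for w, s in respondents if s)
    total = sum(w for w, _ in respondents)
    print(f"weighted support:   {100 * support / total:.0f}%")       # ~91%
    print(f"unweighted support: {100 * 1 / len(respondents):.0f}%")  # 25%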

Regarding the Times' decision to run this article, I wonder how much of it was "hey, polling is kind of goofy" and how much was "look! Here's another way we can show that Trump isn't really resonating with voters!".

Cute as a JavaScript hack, but it's not going to compete with Vocaloid or Festival Singer.

Somebody really needs to crack singing synthesis. Vocaloid from Yamaha is good, but it works by having a live singer sing a prescribed set of phrases, which are then reassembled. Automatic singer generation is needed.

Figure out some way to use machine learning to extract a singer model from recorded music and generate cover songs automatically. Drive the RIAA nuts. Get rich.

Way back in the mid-1980s in the United Kingdom - and there were few places more '80s than that - Superior Software produced Speech!, a software speech-synthesis program for the BBC Micro, a 6502-based machine running at 2 MHz that didn't have PCM audio. It could reasonably reliably read out ordinary English text in a fairly robotic voice.

It was an utter sensation (featuring, among other places, as the computer voice in Roger Waters' Radio Kaos).

It's obviously not going to win awards, being barely intelligible, but if you can achieve that with a table of 49 phonemes of 128 4-bit samples each, then producing basic speech isn't that hard. I think mespeak.js, which this demo is based on (and which is pretty cool, BTW), works on the same principle, although with obviously better samples.
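The table-driven idea is simple enough to sketch. Below, short sine bursts stand in for stored phoneme samples - the table is invented for illustration, not Speech!'s actual data:

    # Concatenative speech, crudely: look each phoneme up in a table
    # of short samples and glue the samples together.
    import math

    SAMPLE_RATE = 8000

    def tone(freq_hz, n_samples=128):
        """Stand-in for a stored phoneme recording: a short sine burst."""
        return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
                for i in range(n_samples)]

    # Hypothetical phoneme table; real systems store recorded samples.
    PHONEMES = {"HH": tone(300), "EH": tone(500), "L": tone(350), "OW": tone(450)}

    def synthesize(phoneme_string):
        out = []
        for p in phoneme_string.split():
            out.extend(PHONEMES[p])
        return out

    samples = synthesize("HH EH L OW")  # "hello", very roughly
    print(len(samples), "samples")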

Project author here, just want to say thanks gattilorenz for sharing (was quite the pleasant surprise to see this on the front page!) and everyone for the feedback + fascinating projects, ideas, links etc. Really cool to see so much enthusiasm for speech+singing synthesis and Web Audio!

It's been a good year for the English singing synthesizer world, with the launch of chipspeech. (https://www.plogue.com/products/chipspeech) But I'm pretty interested in whether more realistic singing synthesizers will be made, since there are a few recent new voices by Acapela Group and others developed for non-singing speech.

This is great! I'm in the very early stages [0] of creating a framework to automate and control physical instruments through hardware & software. Never thought voice would be possible, I'll have to check out integrating this! Thanks!

This is great, nice job! I'm working on a midi player in JavaScript; it would be interesting to use this as the sound font. Maybe assigning certain words to certain pitches. https://github.com/grimmdude/MidiPlayerJS

An HID-based attack has to spawn a terminal and very quickly inject a set of commands, which is highly visible, but only for a short period of time. Once the attack has been carried out there is nothing left to see, so this type of attack is less obvious than the social-engineering one.

MEMS microphones are tiny. It should be possible to combine data from one of those with a light sensor, which would make an HID-based attack far less likely to be detected. The frequency profiles of keystrokes and of someone pushing away an office chair should be fairly easy to discern. You'd want to make it more likely that the attack fires after someone has left their desk (hopefully with the machine unlocked).
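As a sketch of that trigger logic - the sensor reads and thresholds below are invented stand-ins, not a real driver:

    # Fire the payload only after the desk has been quiet and dark
    # for a while, i.e. the user has probably stepped away.
    import time

    QUIET_RMS = 0.05   # normalized microphone level (invented threshold)
    DARK_LUX = 50      # light level suggesting nobody is at the desk
    QUIET_FOR_S = 30   # how long conditions must hold

    def mic_rms():     # placeholder for a MEMS microphone read
        return 0.01

    def light_lux():   # placeholder for a light sensor read
        return 20

    def inject_keystrokes():
        print("desk looks unattended: run HID payload here")

    quiet_since = None
    while True:
        if mic_rms() < QUIET_RMS and light_lux() < DARK_LUX:
            quiet_since = quiet_since or time.time()
            if time.time() - quiet_since >= QUIET_FOR_S:
                inject_keystrokes()
                break
        else:
            quiet_since = None
        time.sleep(1)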

EDIT: Another idea: the USB key uses autorun to pop up what looks like a spammy ad for a PC "cleaner" utility, or something you'd expect on a conference-swag USB key. Its actual purpose is to cover up the shell's window, or to contain the exploit itself.

A really sneaky method would be to create a USB suppository, that looked like a large USB key but would leave the active part inserted inside the socket after the target pulls out the seemingly defective key.

A zero-day key wouldn't have hardware costs significantly different from an HID key's. A lot of microcontrollers will gladly announce themselves as whatever device class you want, with whatever vendor and device IDs you choose. Also, if you're worried about bulk, adding an LED or something that flashes and looks pretty would lower people's guard about why it's so big. USB hacking is an area with wide-open fields to play in.

The reference is to the existence of /dev/tcp when using the "Bourne again" shell. Some other large shells, and gawk, have this "feature" as well.
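For anyone who hasn't seen the feature: bash treats redirections to /dev/tcp/HOST/PORT as TCP connections (it's a bash extension, not POSIX sh). A rough Python equivalent of the classic fetch-a-page one-liner, with example.com as a placeholder host:

    # What bash's `exec 3<>/dev/tcp/example.com/80` gives you,
    # spelled out with an ordinary socket.
    import socket

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(sock.recv(200).decode(errors="replace"))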

Then I noticed he is head of something technical at Google.

We are always reading about the rigor of this company's interviews in testing candidates for practical knowledge.

I guess knowledge of important capabilities of widely/universally installed software is not something they are testing for?

I mean, I am sure there are probably hundreds of employees there who know these things. And they have some legendary programmers on the payroll. It is like a miniature Hall of Fame of computer programming.

I am not even sure what this all means, but I find it interesting to see the gaps in knowledge considering jobs with this company are so highly sought after.

And they are entrusted with protecting an enormous quantity of other people's data.

During development, I have to check the browser's inspector periodically to see what my console.log() calls are saying. That means keeping two browser windows open: the browser itself and the inspector - and in the inspector, I usually only need the console. With these desktop notifications, I can develop and debug web apps with just two windows: a single browser window and a terminal. And it only adds ~100 lines to your project.

Very neat. I plan to take a look at the source code later today. I was thinking of doing a Chrome extension that accomplishes the same goal, but using a small pane at the bottom right instead of notifications.

I really want PyPy to be the future of Python, along with Python 3. I haven't used it in a while, but the performance improvements I could get by switching from `python somecode.py` to `pypy somecode.py` were magical.

So what is the best way I can help support pypy3? Are there any easy issues I can contribute to?

I am new to the Python ecosystem. I use Python for solving problems on HackerRank, and I've noticed that switching from python3 to pypy3 can mean the difference between timing out on some problems and getting them to work.

Can someone ELI5 how PyPy achieves such a difference, and why that improvement can't be contributed back to Python 3?
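Rough ELI5: CPython interprets bytecode one instruction at a time, while PyPy watches for hot loops and compiles them to machine code with a tracing JIT. It can't simply be patched into CPython because PyPy is a separate interpreter implementation (written in RPython) whose JIT is generated from the interpreter itself, not a bolt-on component. A tight numeric loop like this sketch is the classic case where the JIT wins - run the same file under python3 and pypy3 to compare:

    # Hot-loop benchmark: PyPy's JIT compiles this after it warms up,
    # while CPython interprets every iteration.
    import time

    def checksum(n):
        total = 0
        for i in range(n):
            total += (i * i) % 7
        return total

    start = time.time()
    print(checksum(10_000_000))
    print(f"elapsed: {time.time() - start:.2f}s")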