Anecdotal: there are a few different approaches to learning songwriting that seem to click for beginners. The "build up" approach is the most common and is what this link offers: it first teaches beats, then chords, then melodies, and then, in theory, vocals, etc. These lessons in this order make sense to many people, but not everyone.

If you're interested in learning to make music and the lessons in the link are confusing, overwhelming, or boring, some students find a "peel back" approach to learning songwriting easier to grasp at first. A peel back approach involves picking a song and teaching by stripping away each layer: start by stripping away the vocals, then learn the melodies, then the chords, then finally the drum beat underneath it all. One benefit of the peel back approach is that melodies and vocals are the memorable parts of a song and the easiest to pick out when listening to the radio, so a student can learn using songs they already know and like. Either way, songwriting is hard and fun. Best of luck.

P.S. I think Ableton makes good software, and I use it along with FL and Logic. They did a solid job with these intro lessons. But it's worth mentioning that there is free software out there (including Apple's GarageBand) offering the key features a beginner just learning songwriting can practice and mess around with before purchasing a more powerful DAW like Ableton.

I've always wondered why musicians keep up with the conventional musical notation system and haven't come up with something better (maybe a job for an HNer?).

I mean, conventional music notation represents tones on five lines, each capable of holding a "note" (is that the right word?) on a line as well as between lines, possibly pitched down or up, respectively, by flats and sharps (depending on the tune, etc.).

Since Western music has 12 half-tone steps per octave (an octave being an interval in which the frequency doubles; pitch is a logarithmic scale, so compromises have to be made when tuning individual notes across octaves), there is a basic mismatch between the notation and, e.g., the conventional use of chords. A consequence is that, for example, with the treble clef you find C' in the second-from-top space between lines, and thus at a very different place visually than the C one octave below, which sits on, rather than between, an additional line below the bottom-most regular line.
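The equal-temperament arithmetic behind that mismatch is simple to state in code. A minimal sketch, assuming the common A4 = 440 Hz reference (this is standard 12-tone equal temperament, not anything specific to the parent comment):

```python
# 12-tone equal temperament: each half-tone step multiplies the
# frequency by 2**(1/12), so 12 steps exactly double it (one octave).
A4 = 440.0  # common reference pitch in Hz

def note_freq(semitones_from_a4: int) -> float:
    """Frequency of the note n semitones above (negative: below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

c5 = note_freq(3)    # C one octave above middle C
c4 = note_freq(-9)   # middle C
print(round(c5 / c4, 6))  # octave ratio: 2.0
```

The notation spaces notes linearly on the staff while the underlying frequencies grow exponentially, which is exactly the mismatch being described.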

I for one know that my dyslexia when it comes to musical notation (e.g. not recognizing notes fast enough to play from the sheet) has kept me from becoming proficient on the piano (well, that, and my laziness).

This is some good coverage of the music theory behind songwriting, which is important in making songs that sound good.

However, there's another part of making music which is not covered at all here: the actual engineering of sounds. Think of a sound in your head and try to recreate it digitally: it'll involve sampling and synthesizing; there are tons of filters and sound manipulations to go through, all going by different names and serving different purposes. It's a staggering amount of arcane knowledge.

Where is the learning material on how to do this without experimenting endlessly or looking up everything you see? I want a reverse dictionary of sorts, where I hear a transformation of a sound and I learn what processing it took to get there in a DAW. This would be incredibly useful to learn from.
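To give a flavor of how much hides behind a single effect name: even the humble "low-pass filter" found in every DAW can be sketched in a few lines. This is a hypothetical one-pole smoothing filter for illustration, not any particular DAW's implementation:

```python
def one_pole_lowpass(samples, alpha=0.1):
    """One-pole low-pass filter: each output blends the new sample
    into the previous output, attenuating fast changes (high
    frequencies) while passing slow ones (low frequencies)."""
    out, prev = [], 0.0
    for x in samples:
        prev += alpha * (x - prev)
        out.append(prev)
    return out

# A rapidly alternating (high-frequency) signal comes out attenuated.
noisy = [1.0, -1.0] * 8
smooth = one_pole_lowpass(noisy)
print(max(abs(s) for s in smooth))  # well below the input's 1.0 peak
```

A "reverse dictionary" would essentially have to map the audible result of chains like this back to the parameters that produced it, which hints at why it doesn't exist yet.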

This seems like the wrong place to start. This seems like the place to start learning a DAW and snapping together samples; to, IMO, make depersonalized, unoriginal loop music in a society awash with it, because DAWs and looping have created an angel's path to production and proliferation. Learn to drag and drop and you can tell people you meet that you're a musician or a producer. I've met too many mediocre people like this. There should be a disclaimer when this page loads: learn to play an instrument first. Bringing forth music from a physical object utilizes the body as well as the mind, attunes you to nuance, and emphasizes that music is primarily a physical phenomenon. It's also just fun, and you can jam with or perform for friends. This cut-and-paste, drag-and-drop, sample-and-loop mentality popularized by the rise of hip-hop has led to an oversaturation of homogeneous, uninspired, unoriginal sound in society. Maybe I'm old fashioned, but I think people should spend long, frustrated hours cutting and blistering their fingers for the craft, at least at first. That builds character and will show in your music as you move on.

I'm actually working full time on a new DAW that should make writing music a lot faster and easier. Current DAWs don't really understand music. Also, the note input process and experimentation are extremely time consuming, and the DAW never helps. Current DAW : my thing = Windows Notepad : IDE. The HN audience is definitely one of my core groups.

I purchased the Ableton Push 2 a month or so ago and it has to be one of the most beautifully engineered pieces of equipment I have ever used. Look up the teardown video. Extremely simple, yet elegant. The Push 1 was created by Akai, and apparently Ableton wasn't satisfied, so they designed and built their own.

I'm an amateur musician and one of the things I hate about electronic music is how "distant" it all feels.

I'm used to picking up the guitar, playing a few chords and writing a melody.

Ableton (or any other DAW) feels like a chore. I have to boot up the computer, connect the MIDI keyboard, the audio interface and the headphones, then wait for Ableton to load, then create a new track and add a MIDI instrument before I can play a single note.

I know the sessions view in Ableton was an attempt to make the music feel more like jamming, but it doesn't really work for me. A lot of musicians who play instruments I've talked to feel the same way.

I would love an "Ableton in a box" that feels more intuitive and immediate.

Love the simplicity, though it does seem to favor EDM (for obvious reasons).

I've always loved the idea of using Live in a live improvisation context, potentially with multiple instruments having their own looping setup; or just a solo thing. It's hard to find that sort of thing, though.

Over the years I like to think Ableton has been at the forefront of the digital music community (at least among the pack, alongside Korg), at a special nexus of hardware, software, VST developers, and global sharing by way of an incredibly robust and deep Live Suite program. Reaching out and sharing community resources is habitual for them, and I'm very pleased to see this get all sorts of attention from this community. The intersection of technology and art is a bright, multi-cultural future, and with that comes responsibility. To put it in a phrase, this is an example of Ableton providing a ladder up to new members, rather than slamming the door behind them once a certain level was reached. Enjoy!

To all the people complaining, I feel you. There is not one tool that takes you through the entire workflow of making music well, but they sell software pretending they do support the entire workflow. In truth, you write and arrange in specialized notation software, create samples in specialized synthesis software, or record live audio, then you use audio workstations to fix, edit, transform, and mix. Even there you may rely on external hardware or software plugins. These tools aren't meant for a one-person creator. They mimic the specializations in the music industry. A good all-in-one software simply does not exist, and small teams trying to work on these projects are trying to bite off a real big pie. It's very complex and requires a lot of specialized knowledge, and many of the pieces are probably patent-encumbered, too. But good luck!

The first page of that tutorial reminded me of a product I saw at the Apple store a few weeks ago called Roli. They have a great app [0], but the hardware [1] itself is not ideal but unfortunately necessary to unlock some features... I will be waiting for a v2...

Not that it's important but I'm kinda curious why a. my submission would only get 7 points and b. how it was possible for someone else to submit the same link so soon after and gain the points rather than my submission getting boosted?

Is it just random chance/time of day of posting? Or is it because the user who posted this had more points to start with and so was more likely to be "noticed"?

It's designed in a way to make the user (e.g. anyone who likes music) just want to play with it in a way that's very intuitive via its simple, visual layout. And it provides instant feedback that makes you want to continually tinker with it to make something that you like more and more.

This is beautiful and amazing. I love how each step builds on the previous, and uses pop examples to explain theory concepts. I've often wondered so many of the things presented in this, particularly around what common characteristics a genre has with respect to rhythm! Big kudos to the team who built this. I'd love to learn about the development backstory, as this feels a lot like an internal sideproject made by passionate individuals and less like a product idea dreamed up with requirements and specs.

Like any technology, there can be lots of different inputs and outputs. I think it is safe to say that Roland and the TR-808, 909, and 303 changed music notation, and music, forever with their popularization of grid-based music programming. It may be that Ableton is doing the same with their software. Each year the tools get better for these sorts of creative activities. The Beatles recorded Abbey Road on a giant, expensive four-track owned by a record label. In 1995 I saved up my money from a summer job and bought a 4-track cassette recorder for about $500. Now you can get a four-track app for your mobile phone for about $5. Or download an open source one for free.

I've been using Ableton Live for about a week after getting a free copy with the USB interface I bought (Focusrite Scarlett 2i2, highly recommend) and I had to turn to YouTube to figure out how to actually sequence MIDI drums in it.

I use it pretty much solely for recording, but I take advantage of the MIDI sequencer functions to program in a drum beat instead of recording to a click, because I've found my timing and rhythm is so much better playing to drums than it is just playing to a metronome.
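The grid idea behind sequencing a beat like that is simple enough to sketch: a bar is a row of sixteenth-note steps, and each drum fires on some of them. A toy step sequencer follows; the drum names and pattern are made up for illustration, not Ableton's actual drum rack:

```python
# One bar of 16 sixteenth-note steps: 'x' = hit, '.' = rest.
PATTERN = {
    "kick":  "x...x...x...x...",   # four-on-the-floor
    "snare": "....x.......x...",   # backbeat on beats 2 and 4
    "hihat": "x.x.x.x.x.x.x.x.",   # steady eighth notes
}

def events(pattern, bpm=120):
    """Yield (time_in_seconds, drum) pairs for one bar."""
    step = 60 / bpm / 4  # duration of one sixteenth note
    for drum, steps in pattern.items():
        for i, ch in enumerate(steps):
            if ch == "x":
                yield (i * step, drum)

for t, drum in sorted(events(PATTERN)):
    print(f"{t:5.3f}s {drum}")
```

A real sequencer would then fire MIDI notes at those times instead of printing them, but the timing math is the same.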

Did this get voted 1023 points (so far) because it's a great article, or does everyone love music? BTW, I use Ableton after my Pro Tools rig was stolen, and I'm buying a new MatrixBrute. I can't wait to check out this site.

If you want an interesting take on the 'Live' part of Ableton Live, look for 'Kid Beyond Ableton' videos. He builds up tracks live on stage by beatboxing all the instruments, and recently uses something called a Hothand as his controller.

Ableton Live is my main DAW. I use it every day, generally for hours, and for a wide variety of purposes.

The most depressing thing about Ableton is made obvious within two seconds of messing with that tutorial: a complete disregard for music in the sense of pushing the boundaries of time, of doing things that are not tied to any sort of grid, and of music as an emotive form.

So many aspects of music are very annoying or borderline impossible to do in ableton. Yet in all these years, and with so many installations, they just never addressed those issues. Instead they vaguely pretend as if music that would require features they don't have is radically experimental. Which might become true if so many people learn music only through using their software.

Seriously, Ableton. Stop pretending making music is clicking on and off in little boxes. It's embarrassing.

--

Edited to take out the "art" part and put in a couple of more specific criticisms.

Am I missing something? I went through all the tutorials and AFAICT there isn't much here. It seemed like "here's a piano. Here's some music made on the piano. Now bang on the piano. Fun yea?"

Is there really any learning here? Did I miss it? I saw the sample songs and a few minor things like "major chords are generally considered happy and minor sad," etc., but I didn't feel that by going through this I'd actually have learned much about music.

I'm not in any way against EDM or beat-based music. I bought Acid 1.0 through 3.0 back in the 90s, which AFAIK was one of the first types of apps to do stuff like this. My only point is I didn't feel like I was getting enough learning to truly use a piece of software like this. Rather, it seemed like a cool, flashy page with a low content ratio. I'm not sure what I was expecting. I guess I'd like some guidance on which notes to actually place where and why, not just empty grids and very vague direction.

Net neutrality is fundamental to free speech. Without net neutrality, big companies could censor your voice and make it harder to speak up online. Net neutrality has been called the First Amendment of the Internet.

Not just harder. Infinitely more dangerous. Probably the scariest implications of NN being gutted are those around loss of anonymity on the Internet. When ISPs are allowed to sell users' browsing history, data packets, and personal info with zero legal implications, that anonymity suddenly comes with a price. And anything that comes with a price can be sold.

A reporter's sources must be able to remain anonymous in the many instances where the release of information about corruption creates political instability, endangers the reporter, endangers the source, or keeps the truth from being revealed. These "rollbacks" of regulations make it orders of magnitude easier for any entity in a corporation or organization to track down people who attempt to expose their illegal actions or skirting of laws. Corporations have every incentive to suppress information that hurts their stock price. Corrupt local officials and governments have every incentive to suppress individuals who threaten their "job security". Corrupt PACs have every incentive to drown out that one tiny voice that speaks the truth.

A government that endorses suppression cannot promote safety, stability, or prosperity of its people.

It's insane the number of comments on HN, of all places, that don't understand that the end of Net Neutrality is the end of the open web. People who never got a peek at CompuServe have no idea what fire we're playing with here. The open web is the most significant human achievement since the transistor, and we're happily about to kill it.

2. An internet where every consumer assumes everything should be free.

3. An internet where there's only enough room for a handful of players in each market globally, i.e. if you have a "project-management app" there will not be a successful one for each country, much less hundreds for each country.

4. Huge barriers to entry for any new player in many of the markets (no one can even begin competing with Google search for less than 20 million).

I think there's still a lot of potential to open up new markets with different policies that would make the internet a much better place for both consumers and entrepreneurs - especially the small guys. I'm just not 100% sure maintaining net-neutrality is the best way to help the little guy and bolster innovation. Anyone have any ideas how we could alleviate some of the above mentioned problems?

EDIT: another question :) If net-neutrality has absolutely nothing to do with the tech monopolies maintaining their power position then why do they all support it? [https://internetassociation.org/]

I've written it before and I'll write it again (despite the massive downvotes from those who want to silence dissent): Title II regulation of the Internet is not the net neutrality panacea that many people think it is.

That is the same kind of heavy-handed regulation that gave us the sorry copper POTS network we are stuck with today. The free market is the solution, and must be defended against those who want European-style top-down national regulation of what has historically been the most free and vibrant area of economic growth the world has ever seen.

The reason the internet grew into what it is today during the 1990s was precisely because it was so free of regulation and governmental control. If the early attempts[1] to regulate the internet had succeeded, HN probably wouldn't exist and none of us would have jobs right now.

They're arguing that Title II Classification is not the same as Net Neutrality, with the following statement:

"Title II is a source of authority to impose enforceable net neutrality rules. Title II is not net neutrality. Getting rid of Title II does not mean that we are repealing net neutrality protections for American consumers.

"We want to be very clear: As Brian Roberts, our Chairman and CEO, stated, and as Dave Watson, President and CEO of Comcast Cable, writes in his blog post today, we have and will continue to support strong, legally enforceable net neutrality protections that ensure a free and Open Internet for our customers, with consumers able to access any and all the lawful content they want at any time. Our business practices ensure these protections for our customers and will continue to do so."

So if Title II goes away, where do those strong, legally enforceable net neutrality protections come from? Wasn't that the reasoning behind Title II in the first place: that it's the only effectively strong, legally enforceable way of protecting net neutrality (versus other methods with loopholes)?

A few weeks ago on HN, someone made an analogy to water: someone filling their swimming pool should pay more for water than someone showering or cooking with it. This seems to make sense to me, water is a scarce resource and it should be prioritized.

Is the same true of the Internet? I absolutely agree that ISPs that are also in the entertainment business shouldn't be allowed to prioritize their own data, but that seems to me an anti-trust problem, not a net neutrality problem. I also agree that ISPs should be regulated like utilities, but even utilities are allowed to limit service to maintain their infrastructure (see: rolling blackouts).

Perhaps I simply do not understand NN, and perhaps organizations haven't done a good job of explaining it, but I wonder whether these problems are best solved by the FTC, not the FCC.

> Net neutrality is fundamental to free speech. Without net neutrality, big companies could censor your voice and make it harder to speak up online.

Big companies are censoring your voice right now! Facebook, Twitter, YouTube, and literally every other big provider censor online speech all the time. If it's so scary, why does nobody care? If it's not, what is Mozilla trying to say here?

> Net neutrality is fundamental to competition. Without net neutrality, big Internet service providers can choose which services and content load quickly, and which move at a glacial pace.

The Internet has been around for a while, and nothing like that has happened, even though we didn't have the current regulations in place until 2015, i.e. the last two years. At what point do we start asking for evidence and not just "they might do something evil"? Yes, there were shenanigans, and they were handled, well before the 2015 regulations were in place.

> Net neutrality is fundamental to innovation

Again, innovation has been going on for decades without current regulations. What happened that suddenly it started requiring them?

> Net neutrality is fundamental to user choice. Without net neutrality, ISPs could decide you've watched too many cat videos in one day,

ISPs never did this, as far as we know, in the entire history of ISPs. Why would they suddenly start now? Because they want to be abandoned by users and fined by regulators (which did fine ISPs well before 2015)?

> In 2015, nearly four million people urged the FCC to protect the health of the Internet

Before 2015, the Internet was doing fine for decades. What happened between 2015 and 2017 that now we desperately need this regulation and couldn't survive without it like we did until 2015?

My Internet connection contract already says that they reserve the right to queue, prioritize, and throttle traffic, which is used to optimize traffic. Doesn't sound too neutral to me. It's also clearly stated that some traffic on the network gets absolute priority over secondary classes.

Interestingly at one point 100 Mbit/s connection wasn't nearly fast enough to play almost any content from YouTube. - Maybe there's some kind of relation, maybe not.

I think a great thing to do (if you are for net neutrality), is pick specific parts of the NPRM filed with this proceeding and comment directly on it[1] to help do some work for the defense. I feel sorry for anyone who might actually need to address this document point for point to defend net neutrality.

I tried my hand at the general claim of regulatory uncertainty hurting business, then Paragraphs 45 and 47:

-> It is worth noting that by bringing this into the spotlight again, the NPRM is guilty of igniting the same regulatory uncertainty it repeatedly claims has hurt investment.

-> Paragraph 45 devotes 124 words (94% of the paragraph), gives 3 sources (75% of the references in this paragraph) and a number of figures (100% of explicitly hand-picked data) making the claim Title II regulation has suppressed investment. It then ends with 8 words and 1 reference vaguely stating "Other interested parties have come to different conclusions." Given the NPRM's insistence on both detail and clarity, this is absolutely unacceptable.

-> There are also a number of extremely misleading and unsubstantiated arguments throughout. Reference 114 in Paragraph 47, for example, is actually a haphazard mishmash of 3 references with clearly hand-picked data from somewhat disjointed sources and analyses. The next two references [115, 116] in the same paragraph point to letters sent to the FCC over 2 years ago from small ISPs, before regulations were classified as Title II. Despite discussing the fears raised in these letters, the NPRM provides little data on whether these fears were actually borne out. In fact, one of the providers explicitly mentioned in reference 115, Cedar Falls Utilities, has not in any way been subject to these regulations (they have fewer than 100,000 customers; in fact, the population of Cedar Falls isn't even half of the FCC's 100,000-customer exemption!). This is obviously feigned concern for small ISP businesses, and again, given the NPRM's insistence on both detail and clarity, this is absolutely unacceptable.

It is a real pity that in the US, net neutrality was never established by law, but "just" at the institutional level.

Here in the EU, things are much slower, and the activists were somewhat envious of how fast net neutrality was established in the US, while in the EU this is a really slow legislative process. But now it seems this slower way is at least more sustainable. We still don't have real net neutrality in the EU, but the achievements we have so far are more durable and can't be overthrown that quickly.

Many of these articles are missing an easily exploitable position. The key term is "bandwidth" which is the resource at stake. What is being fought over is how to define this "bandwidth" in a way that will be enforceable against the citizen and favorable to the corporation (i.e. "government").

One way they could do this is to divide it like they did the radio spectrum by way of frequency, where frequency is related to "bandwidth". The higher the frequency, the greater the bandwidth. With communication advances, the frequencies can be grouped just like they did with radio, where certain "frequencies" are reserved by the government/military, and others are monopolized by the corporations, and a tiny sliver is provided as a "public" service.

This would be the most easily enforceable way for them to attack NN and the First Amendment, as it already exists in the form of radio.

* It is already being applied by cable providers through "downstream/upstream", where your participation by "uploading" your content is viewed as inferior to your consumption of it. I.e., your contribution (upload) is a tiny fraction of your consumption (download).

* Also, AWS, Google, and other cloud services charge your VPS for "providing" content (egress) and charge you nothing for consuming (ingress). On that scale, the value of what you provide is so minuscule as to be almost nonexistent compared to the value of what you consume.

The top comments here seem to misunderstand net neutrality. It's not about companies selling your browsing history (that was recently approved by Congress in a separate bill [1]) but rather about whether ISPs can prioritize the data of different sites or apps. IIUC, net neutrality doesn't really provide any privacy protections, though it's likely good for privacy by making a more competitive market that motivates companies to act more (though not always) in consumers' interests.

Title II "Net Neutrality" is a dangerous power grab -- a solution in search of a problem that doesn't exist, with the potential to become an engine of censorship (requiring ISPs to non-preferentially deliver "legal content" invites the FCC and other regulatory and legislative bodies to define some content as "illegal").

Title II "Net Neutrality" is also an instance of regulatory capture through which large consumers of bandwidth (such as Google and Netflix) hope to externalize the costs of network expansions to accommodate their ever-growing bandwidth demands. To put it differently, instead of building those costs into the prices their customers pay, they want to force Internet users who AREN'T their customers to subsidize their bandwidth demands.

If the internet is fundamental to free speech, maybe it's not a good idea to hand its freedom over to state control, and in particular to an agency that has historically gone beyond its original mandate and censored content.

When you hand over control to the government, don't ask yourself what it would look like if you were creating the laws, ask yourself what it'll look like when self-interested politicians create them.

I'm not sure putting the internet into the same class of service as a telephone made sense for all the unintended consequences. Everyone is fine until they wind up paying $50/month for their internet and then seeing another $15 in government fees added to their bill. From a pragmatic point of view, I'm sure the government will always have the option to regulate it later on.

It's sad that this article stayed at the top positions for so little time. And we are on HN.

But is this HN folks' fault?

At the time of my writing, "Kubernetes clusters for the hobbyist" (who thinks it is as important as this one?), with 470 fewer points and almost 300 fewer comments, both posted 6-7 hours ago, is six positions above.

I wasn't impressed with this article; it reads like fear mongering. More importantly, I don't think the fix is regulation, I think it's better privacy tech + increased competition via elimination of local monopolies. Do we really want to depend on government to enforce privacy on the Internet?

Neutrality does not mean anything should be authorized... international law should allow ISPs to submit to judicial surveillance of individuals if those are suspected of serious crimes, terrorism, pedophilia, black hat hacking, or psychological operations/fake news. I don't think that because policemen can stop me in the street it is a violation of my freedom. Moreover, the article is extremely vague and uses argumentum ad populum to push its case while remaining quite unclear about what is really planned: "His goal is clear: to overturn the 2015 order and create an Internet that's more centralized."

So, which is it HackerNews? Are we OK with companies deciding what gets on the internet, or are we not? On one hand, we laud Facebook et al. for suppressing "fake news", and then we get upset when ISPs do the same.

Furthermore, the FCC has historically engaged in content regulation. Anyone wonder why there are no more cartoons on broadcast television? Or why the FCC is investigating Colbert's Trump jokes? If we're so concerned about content freedom, the FCC is not the organization to trust.

>The internet is not broken, There is no problem for the government to solve. - FCC Commissioner Ajit Pai

This is sooo true. If internet carriers were favoring some kinds of content, or censoring or giving less bandwidth to certain content, or charging for certain content, and this was causing the problems described in the Mozilla article, then yes, we could have legislation to solve that problem.

What gets to me about the net neutrality movement is that the legislation they are pushing for is based on vague fears and panic. Caring about net neutrality has become some sort of weird silicon valley techno-virtue signaling.

If ISPs start behaving badly or restricting free speech, I would be happily on board to having legislation to address that. This has not happened and there is no evidence that there is any imminent threat of this happening. Net neutrality legislation is a solution to a vague non-existent speculative problem.

Strategically, this (advertising IPFS as an anti-censorship tool and publishing censored documents on it and blogging about them) doesn't seem like a great idea right now.

Most people aren't running IPFS nodes, and IPFS isn't seen yet as a valuable resource by censors. So they'll probably just block the whole domain, and now people won't know about or download IPFS.

We saw this progression with GitHub in China. They were blocked regularly, perhaps in part for allowing GreatFire to host there, but eventually GitHub's existence became more valuable to China than blocking it was. That was the point at which I think that, if you're GitHub, you can start advertising openly about your role in evading censorship, if you want to.

But doing it here at this time in IPFS's growth just seems like risking that growth in censored countries for no good reason.

Correct me if I'm wrong, but if accessing some content through IPFS makes you a provider for that content, doesn't that mean you are essentially announcing to the world that you accessed the content, which in turn can be used for targeting you by those who do not want you to access it?

In other words, if someone from Turkey (or China or wherever) uses IPFS to bypass censored content, wouldn't it be trivial for the Turkish/Chinese/etc government to make a list with every single person (well, IP) that accessed that content?
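A toy model of the concern (purely illustrative; this is not IPFS's actual DHT protocol): in a network where fetching a block also advertises you as a provider of it, the provider list doubles as an access log that anyone, including a censor, can query.

```python
providers = {}  # content hash -> set of peer IDs providing it

def fetch(content_hash, my_peer_id):
    # After retrieving the block, the node re-provides it, publicly
    # adding itself to the provider list for that hash.
    providers.setdefault(content_hash, set()).add(my_peer_id)

def who_has(content_hash):
    # The same lookup ordinary peers use to find content can be used
    # to enumerate everyone who has touched it.
    return providers.get(content_hash, set())

fetch("QmCensoredDoc", "peer-alice")
fetch("QmCensoredDoc", "peer-bob")
print(sorted(who_has("QmCensoredDoc")))  # ['peer-alice', 'peer-bob']
```

Whether real IPFS nodes can fetch without re-providing, or route the traffic through an anonymizing layer, is exactly the question being raised.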

Ironically, I've just discovered that https://ipfs.io/ has a certificate signed by StartCom, known for being a source of fake certificates for prominent domains [1]. So in order to work around censorship, I have to go to a site that, to establish trust, relies on a provider known for providing fake certificates. D'oh.

Some additional information may help in the duty vs prudence debate. It's true that IPFS gateways can be blocked. But as noted, anyone can create gateways, IPFS works in partitioned networks, and content can be shared via sneakernet. Content can also be shared among otherwise partitioned networks by any node that bridges them.

For example, it's easy to create nodes on both the open Internet and the private Tor OnionCat IPv6 /48. That should work for any overlay network. And once nodes on such partitioned networks pin content, outside connections are irrelevant. Burst sharing is also possible. Using MPTCP with OnionCat, one can reach 50 Mbps via Tor.[0,1]

How is Wikipedia censored in Turkey? Are providers threatened to be punished if they resolve DNS queries for wikipedia.org? Or are they threatened to be punished if they transport TCP/IP packets with IPs that belong to Wikipedia?

Wouldn't both be trivial to go around? For DNS, one could simply use a DNS server outside Turkey. For TCP/IP packets, one could set up a $5 proxy on any provider from around the world.

These distributed file systems are really interesting. I'm curious to know if there is anything in the works to also distribute the compute and database engines required to host dynamic content. Something like combining IPFS with Golem (GNT).

I'm not sure this thought makes sense, but just putting it out there for rebuttals and to understand what is really possible:

I assume IPFS networks can be disrupted by a state actor and only thing that a state actor like the US may have some trouble with is strong encryption. I assume it's also possible that quantum computers, if and when they materialize at scale, would defeat classical encryption.

So my point in putting forward these unverified assumptions is to question whether ANY technology can stand in the way of a determined, major-world-power-type state actor. Personally, I have no reason to believe that's realistic, and all these technologies are toys relative to the billions of dollars in funding that the spy agencies receive.

When browsing the content, how does linking work? I mean, don't they kinda have to link to a hash? But how can they know the hash of a page when the links of that page are dependent on the other pages and this may be a circle?
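A toy sketch of why hash links can't be circular (plain SHA-256 standing in for real IPFS CIDs; the page contents are made up): a page's hash is a function of its bytes, including any hashes it embeds, so two pages cannot each embed the other's hash. Archived snapshots sidestep this with relative links, which don't participate in the hash.

```python
import hashlib

def cid(content: bytes) -> str:
    # Stand-in for a real IPFS CID: content-addressing via SHA-256.
    return hashlib.sha256(content).hexdigest()

# Page B exists first, so its hash is fixed by its bytes.
page_b = b"About page."
cid_b = cid(page_b)

# Page A may link to B by hash, since cid_b is already known.
page_a = f"Home page. About: /ipfs/{cid_b}".encode()
cid_a = cid(page_a)

# B cannot now link back to A by hash: embedding cid_a would change B's
# bytes, hence cid_b, hence A's bytes, hence cid_a, and so on forever.
# A snapshot avoids the cycle with relative links, which live outside
# the hash chain:
page_b_fixed = b"About page. Home: ./index.html"

assert cid(page_a) == cid_a   # same bytes, same address, always
assert cid_a != cid_b         # different content, different address
```

So within one immutable snapshot, links between pages are relative paths; a mutable pointer to the latest snapshot is what IPNS is for.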

Maybe a very dumb question, but why didn't they build anonymity into it rather than advise users to route it over Tor? My guess is it may have something to do with the Unix philosophy. It's still a great tool regardless.

I'd be really curious to hear more about how Goal 2 (a full read/write wikipedia) could work.

IIRC, writing to the same IPNS address is (or will be?) possible with a private key, so allowing multiple computers to write to files under an IPNS address would require distributing the private key for that address?

Also, I wonder how abuse could be dealt with. I've got to imagine that graffiti and malicious edits would be much more rampant without a central server to ban IPs. It seems like a much easier (near-term) solution would be a write-only central server that publishes to a (read-only) IPNS address, where the load could be distributed over IPFS users.

I really like the fact that the CockroachDB team recently did a detailed Jepsen test with Aphyr. The follow up articles from both CockroachDB and Aphyr explaining the findings are very interesting to read. For those who might be interested -

Pardon the nature of my question, but I'm really interested in what your experience has been so far building a database with Go. Has its runtime (the GC for example) posed any issues for you so far? Looking at other RDBMSs, languages with manual memory management like C or C++ seem to be the go-to choice, so what were the reasons you chose Go?

I'm quite frankly amazed that Go's runtime is able to support a database with such demanding capabilities as CockroachDB!

I think this is the DB project of the year in the open source community. Cockroach Labs has made an incredible effort to develop and test a new database, and these guys are giving it away for free (I read about the Series B raise too ;)), for us to use.

Thanks for doing this. You're very much appreciated. (BTW, I love the name and the logo!!)

How does Cockroach efficiently handle the shuffle step when data is on many nodes on the cluster and has to move to be joined? Does Cockroach need high capacity network links to function well?

I always see companies making the claim of linear speedup with more nodes, but surely that can't be the case if the nodes are geographically separated over anything less than gigabit links? Perhaps linear speedup with more nodes is only possible over high speed connections? How high is that exactly?
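As a back-of-envelope illustration (the data size and link speeds below are made-up numbers, not CockroachDB measurements), just moving the join input across the cluster dominates quickly on slower links:

```python
# Back-of-envelope shuffle cost: time to move the join input over the
# slowest link. All figures are hypothetical, for illustration only.
def shuffle_seconds(data_gb: float, link_mbps: float) -> float:
    megabits = data_gb * 8 * 1000   # GB -> megabits (decimal units)
    return megabits / link_mbps

# Moving 10 GB of join input:
print(shuffle_seconds(10, 1000))   # gigabit link: 80 seconds
print(shuffle_seconds(10, 100))    # 100 Mbit link: 800 seconds
```

That's why "linear speedup" claims usually carry an implicit assumption that the interconnect is fast enough that shuffle time stays small relative to compute.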

Congratulations to the team on the release! Introducing this kind of database is no easy task - thank you and great job, keep up the good work!

It seems like many new databases tend to suffer from providing scale out but relatively poor per node performance so that a mid-size cluster still performs worse than a single node solution based on a traditional SQL database.

And if you genuinely need huge insert volumes, because of the per node performance you'd need an enormous cluster whereas Cassandra would deal with it quite comfortably.

CockroachDB looks like a great alternative to PostgreSQL, congrats to the team for doing so much in such a short time. The wire protocol is compatible with Postgres, which allows re-using battle-tested Postgres clients. However it's a non-starter for my use case since it lacks array columns, which Postgres supports [0]. I also make use of fairly recent SQL features introduced in Postgres 9.4, but I'm not sure if there are major issues with compatibility.

I've had a question for quite some time though (and I think there is an RFC for it on GitHub): do we still need to have a "seed node" that is run without the --join parameter, or can we run all the nodes with the same command line, with the cluster waiting for quorum to reconcile on its own?

About nine months ago we made the decision to go with RethinkDB for our infrastructure in place of PostgreSQL (at least for live replicated data), but if this existed at the time we'd have seriously taken a look. We're pretty happy with RethinkDB but I plan on still taking a look at this so we have a backup option.

Very disappointed with HN turning into a 4chan/reddit style trolling board about the name. Guys, we get it that you don't like the name. Can we please stop bike shedding and move on? The people at cockroachdb have obviously seen all your messages but decided it's worth keeping the name. What more is there to talk about? Why not talk about the relative technical merits of this DB?

Since there's a little side riff about the name going on I thought I'd throw in my 2 cents. Personally I love the name. I think it does a great job of conveying the spirit of the project and provides unlimited pun opportunities. Plus it's memorable, just like a real life roach encounter. Unfortunately I'm sure some people will discriminate against your DB on the basis of name alone. That's ludicrous, but that's our species for ya.

I think the name "Cockroach" was a really poor decision from a marketing standpoint. The team intended to convey durability, since cockroaches can live through anything. But when I think of a cockroach, I think, gross, disgusting, etc.

First, this is awesome! Congrats to the team for reaching this milestone.

Secondly, I think the name is memorable and conveys exactly what it should. If I were ever on an engineering team that chose not to use CockroachDB due to being "grossed out" by the name, I wouldn't be on that engineering team for long. Perhaps someone can explain the knee-jerk reaction to it for me.

I respect Brendan, and although it is an interesting article, I have to disagree with him: the OS tells you about OS CPU utilization, not CPU micro-architecture functional-unit utilization. So if the OS uses a CPU for running code until a physical interrupt or a software trap happens, in that period the CPU has been doing work. Unless the CPU were able to do a "free" context switch to a cached area instead of having to wait for e.g. a cache miss (hint: SMT/"hyperthreading" was invented exactly for that use case), the CPU is actually busy.

If in the future (TM) using CPU performance counters for every process becomes really "free" (as in "gratis" or "cheap"), the OS could report badly performing processes for the reasons exposed in the article (low IPC indicating poor memory access patterns, unoptimized code, code using too-small buffers for I/O and thereby degrading system performance through excessive kernel processing time, etc.), showing the user that despite high CPU usage, the CPU is not getting enough work done (in that sense I could agree with the article).

The problem is that IPC is also a crude metric. Even leaving aside fundamental algorithmic differences, an implementation of some algorithm with IPC of 0.5 is not necessarily faster than an implementation that somehow manages to hit every execution port and deliver an IPC of 4.

I can improve IPC of almost any algorithm (assuming it is not already very high) by slipping lots of useless or nearly useless cheap integer operations into the code.
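A toy arithmetic sketch of that point (all numbers invented for illustration): padding a loop with filler instructions can raise IPC while also raising total cycles, i.e. wall-clock time.

```python
# IPC = instructions retired / cycles elapsed. All figures hypothetical.
def ipc(instructions, cycles):
    return instructions / cycles

# A memory-bound loop: 100 useful instructions stretched over 400 cycles.
base = ipc(100, 400)                # 0.25

# "Improved" version: 900 cheap filler ops slipped in. Some issue during
# the stalls, but say they still add 300 cycles of their own.
padded = ipc(100 + 900, 400 + 300)  # ~1.43

print(base, padded)   # IPC more than quintupled...
print(400, 700)       # ...while the same work now takes 300 more cycles.
```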

People always tell you "branch misses are bad" and "cache misses are bad". You should always ask: "compared to what?" If it was going to take you 20 cycles' worth of frenzied, 4-instructions-per-clock work to calculate something you could keep in a big table in L2 (assuming that you aren't contending for it), you might be better off eating the cache miss.
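A rough expected-cost model makes the "compared to what" concrete. The latencies below are ballpark figures for illustration, not measurements of any specific core:

```python
# Ballpark cycle costs, invented for illustration.
RECOMPUTE_CYCLES = 20     # ~20 cycles of dense 4-wide arithmetic
L2_HIT_CYCLES = 14        # typical-ish L2 load-to-use latency
DRAM_MISS_CYCLES = 200    # typical-ish miss all the way to DRAM

def expected_lookup_cycles(l2_hit_rate):
    # Expected cost of a table lookup at a given L2 hit rate.
    return l2_hit_rate * L2_HIT_CYCLES + (1 - l2_hit_rate) * DRAM_MISS_CYCLES

# Table resident in L2 almost all the time: the lookup wins.
print(expected_lookup_cycles(0.99))   # ~15.9 < 20
# Contention evicting it half the time: recomputing wins.
print(expected_lookup_cycles(0.50))   # 107.0 > 20
```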

Similarly you could "improve" your IPC by avoiding branch misses (assuming no side effects) by calculating both sides of an unpredictable branch and using CMOV. This will save you branch misses and increase your IPC, but it may not improve the speed of your code (if the cost of the extra work is bigger than the cost of the branch misses).
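A minimal sketch of the branchless-select idea in Python, as a stand-in for what a compiler does with CMOV (whether it actually helps depends on the hardware and on how unpredictable the branch really is):

```python
def branchy(cond, x):
    if cond:             # an unpredictable branch: ~15-20 cycle miss penalty
        return x * 3
    else:
        return x + 7

def branchless(cond, x):
    a = x * 3            # both sides always computed (must be side-effect free)
    b = x + 7
    mask = -int(bool(cond))        # all-ones if cond, zero otherwise
    return (a & mask) | (b & ~mask)

# Same results either way; branchless trades two computations plus a
# select for zero mispredictions.
for cond in (True, False):
    for x in (-5, 0, 10):
        assert branchy(cond, x) == branchless(cond, x)
```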

IPC is amazing. We had some "slow" code, did a little profiling, and found that a hash lookup function was showing very low IPC about half the time. Turns out, the hash table was mapped across two memory domains on the server (NUMA), and memory lookups from one processor to the other processor's memory were significantly slower.

perf on a binary that is properly instrumented (so it can show you per-source-line or per-instruction data) is really great.

I use `htop` for all of my Linux machines. It's great software. But one of my biggest gripes is that "Detailed CPU Time" (F2 -> Display options -> Detailed CPU time) is not enabled by default.

Enabling it allows you to see a clearer picture of not just stalls but also CPU steal from "noisy neighbors" -- guests also assigned to the same host.

I've seen CPU steal cause kernel warnings of "soft-lockups". I've also seen zombie processes occur. I suspect they're related but it's only anecdotal: I'm not sure how to investigate.

It's pretty amazing what kind of patterns you can identify when you've got stuff like that running. Machine seems to be non-responsive? Open up htop, see lots of grey... okay so since all data is on the network, that means that it's a data bottleneck; over the network means it could be bottlenecked at network bandwidth or the back-end SAN could be bottlenecked.

Fun fact: Windows Server doesn't like having its disk I/O go unserviced for minutes at a time. It's not fun having another team come over and get angry because you're bluescreening their production boxes.

Perf is fascinating to dive into. If you are using C and gcc you can use record/report, which show you line by line and instruction by instruction where you are getting slowdowns.

One of my favorite school assignments was one where we were given an intentionally bad implementation of the Game of Life, compiled with -O3, and had to get it to run faster without changing compiler flags. It's sort of mind-boggling how fast computers can do stuff if you can reduce the problem to fixed-stride for loops over arrays that can be fully pipelined.

At Tera, we were able to issue 1 instruction/cycle/CPU. The hardware could measure the number of missed opportunities (we called them phantoms) over a period of time, so we could report percent utilization accurately. Indeed, we could graph it over time and map periods of high/low utilization back to points in the code (typically parallel/serial loops), with notes about what the compiler thought was going on. It was a pretty useful arrangement.

The article is interesting, but IPC is the wrong metric to focus on. Frankly, the only thing we should care about when it comes to performance is time to finish a task. It doesn't matter if it takes more instructions to compute something, as long as it's done faster.

The other metric you can mix with execution time is energy efficiency. That's about it. IPC is not a very good proxy. Fun to look at, but likely to be highly misleading.

It seems to me that the CPU utilization metric (from /proc/stat) has far more problems than misreporting memory stalls.

As far as I understand it, the metric works as follows: at every clock interrupt (every 4 ms on my machine) the system checks which process is currently running, before invoking the scheduler. If it's the idle process, idle time is accounted; otherwise the processor is regarded as utilized.

(This is what I got from reading the docs, and digging into the source code. I am not 100% confident I understand this completely at this point. If you know better please tell me!)

There are many problems with this approach. Every time slice (4 ms) is accounted either as completely utilized or completely free, yet there are many reasons for processes going on or off CPU outside of clock interrupts; blocking syscalls are the most obvious one. In the end a time slice might be utilized by multiple different processes and interrupt handlers, but if the idle thread happens to be on CPU at the very end of the slice, the whole slice is counted as idle time!
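That accounting gap is easy to simulate. Here is a contrived, hypothetical process that always happens to sleep across the tick boundary: tick-based accounting sees 0% utilization even though the process truly uses 50% of the CPU.

```python
TICK_US = 4000  # 4 ms clock interrupt, in microseconds

def busy_at(t_us):
    # A process that wakes 0.5 ms after each tick, runs for 2 ms, sleeps.
    # True utilization is 50%, but it is always asleep *at* the tick.
    phase = t_us % TICK_US
    return 500 <= phase < 2500

# What tick-based accounting sees over one second (250 ticks):
sampled = sum(busy_at(i * TICK_US) for i in range(250)) / 250

# Ground truth, checked every 10 microseconds:
true = sum(busy_at(t) for t in range(0, 1_000_000, 10)) / 100_000

print(f"sampled: {sampled:.0%}   true: {true:.0%}")   # 0% vs 50%
```

Real workloads aren't this adversarial, but anything whose wakeups correlate with the timer tick gets misaccounted in the same way.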

Now I wonder how much manual work it would be to do these combined flamegraphs with CPI/IPC information? My cursory search found nary a mention after 2015... Perhaps this is still hard and complicated.

To me it seems really useful to know why a function takes so long (waiting or calculating) and not "merely" how long it takes, even if the information is not perfectly reliable, nor measurable without some effect on execution.

CPU frequency scaling can also lead to somewhat unintuitive results. On few occasions I've seen CPU load % increasing significantly after code was optimized. Optimization was still actually valid, and the actual executed instructions per work item went down, but the CPU load % went up since OS decided to clock down the CPU due to reduced workload.

Interestingly IPCs are also used to verify new chipsets in embedded companies. Run the same code with newer generation chipset and see if IPC is better than the previous. IPCs are one of the main criteria if the new chipset is a hit or miss (others are power..)

> You can figure out what %CPU really means by using additional metrics, including instructions per cycle (IPC)

Correct me if I am wrong, but this won't work for spinlocks in busy loops: you do have a lot of instructions being executed, but the whole point of the loop is to wait for the cache to synchronize, and as such, this should be taken as "stalled".
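A crude illustration of the point, with Python threads standing in for cores (the iteration count is not a real instruction count): a busy-wait retires plenty of "instructions" while accomplishing nothing.

```python
import threading
import time

flag = threading.Event()
spins = 0

def spinner():
    # Busy-wait: executes instructions flat out, produces nothing
    # until another thread sets the flag.
    global spins
    while not flag.is_set():
        spins += 1

t = threading.Thread(target=spinner)
t.start()
time.sleep(0.05)   # let it spin for ~50 ms
flag.set()
t.join()

# The loop "looked busy" the whole time, with high instruction
# throughput, yet every one of those iterations was pure waiting.
print(spins)
```

A counter that only looks at instructions-per-cycle would rate this loop as healthy, which is exactly the parent's objection.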

Look, CPU utilization is misleading. Did you forget to use -O2 when compiling your code? Oops, CPU utilization is now including all sorts of wasteful instructions that don't make forward progress, including pointless moves of dead data into registers.

Are you using Python or Perl? CPU utilization is misleading; it's counting all that time spent on bookkeeping code in the interpreter, not actually performing your logic.

CPU utilization also measures all that waste when nothing is happening, when arguments are being prepared for a library function. Your program has already stalled, but the library function hasn't started executing yet for the silly reason that the arguments aren't ready because the CPU is fumbling around with them.

I didn't know about tiptop, and it sounds interesting. Running it, though, it only shows "?" in the Ncycle, Minstr, IPC, %MISS, %BMIS and %BUS columns for a lot of processes, including, but not limited to, Firefox.

This is silly. The conceit that IPC is a simplification for "higher is better" is exactly the problem he has with utilization.

True, but useful? Most of us are busy trying to get writes across a networked service. Indeed, getting to 50% utilization is often a dangerous place.

For reference, running your car by focusing on rpm of the engine is silly. But, it is a very good proxy and even more silly to try and avoid it. Only if you are seriously instrumented is this a valid path. And getting that instrumented is not cheap or free.

Whether this counts as useful work depends on the context of what you are measuring. The initial access of a buffer almost universally stalls (unless you prefetched 100+ instructions ago). But starting to stream that data into L1 is useful work.

Aiming for 100%+ IPC is _beyond_ difficult even for simple algorithms and critical hot path functions. You not only require assembler cooperation (to assure decoder alignment), but you need to know _what_ processor you are running on to know the constraints of its decoder, uOP cache, and uOP cache alignment.

Thinking about the CPU as mainly the ALU seems myopic. The job of the CPU is to get data into the right pipeline at the right time. Waiting for a cache miss means it's busy doing its job. Thus "CPU busy" is a reasonable metric the way it is currently defined and measured. (After all, the memory controller is part of the CPU these days.)

Totally disagree with the premise of the article. Every metric tool that I know of that shows CPU utilization correctly shows CPU work. Load, on the other hand, represents CPU plus iowait (overall system throughput). I/O wait is also exposed in top as the "wait" metric. An Amazon EC2 box can very easily get to load(5) = 10 (anything above 1 per core is considered bad), but the CPU utilization metric will still show almost no CPU util.

The core waiting for data to be loaded from RAM is busy. Busy waiting for data.

Instructions per cycle can also be misleading. Modern CPUs can do multiple shifts per cycle, but something like division takes many cycles.

It all doesn't matter anyway, as instructions per cycle does not tell you anything specific. Use the CPU's built-in performance counters; use perf. It basically works by sampling every once in a while. It (perf, or any other tool that uses performance counters) shows you exactly which instructions are taking up your process's time. (Hint: it's usually the ones that read data from memory, so be nice to your caches.)

People are thinking way too much about how much this saves them at a personal level.

I think people should instead be thinking about how we can save the existence of the entire species, and all other higher order forms of life on earth, rather than focusing on their individual tax breaks, savings, or other trivial concerns. Yes, your cash flow is rendered quite trivial if life on Earth ends.

Invest in the Life Economy, and turn your back on the Death Economy. The value here is in the benefit to life, concern over state monopolized currencies clearly facilitates an economy of death.

Yikes. I just signed a contract for a new roof here last month; it's going to cost about $12k. Just did the estimate for the Tesla Solar Roof... $80,300, so $87k if I want the battery too. I can barely afford the $12k right now; the $80k is just so far over that it's not even close, even with how much I'd save over the years in electricity.

That being said, I love these things, so hoping it gets cheaper in the coming years.

They are going after the portion of the market that would replace their roof with a high end material, and are interested in solar.

If you are a home owner in this situation, you could consider investing into your home. The roof will pay dividends over the next 30 years, and is attractive and durable.

I think it will do extremely well. Perhaps the best opportunity is in new construction. Imagine having 50k more baked into your mortgage, but having your roof lower your ongoing energy costs! Great potential in that market, could also optimize the roof designs for power generation.

I have always been a huge fan of a quick transition to sustainable energy sources. There is just one little thing I don't understand.

Why do they expect people to make electricity at their homes? You can buy a little piece of land in a desert, put solar panels there and distribute the electricity to other places. And you don't have to climb on any roof during installation or maintenance.

It is not profitable today in a free market to bake your own bread or to plant your own vegetables, because if it is done at large scale by professionals, it can be made much cheaper while keeping the quality good. So I don't understand how home-made electricity could economically compete with the professional energy farms of the future.

At the event, Musk said Tesla's roof would price competitively with normal roofs and could even cost less.

"It's looking quite promising that a solar roof will actually cost less than a normal roof before you even take the value of electricity into account," Musk said at the event. "So the basic proposition would be: Would you like a roof that looks better than a normal roof, lasts twice as long, costs less, and, by the way, generates electricity? It's like, why would you get anything else?"

For a counterweight let me present this interview[0] with the CEO of "the largest privately held solar contracting company in America", near the end of which he says several disparaging things about Tesla's roof, including,

> When I saw the demo he did at Universal Studios... What I saw was a piece of glass that looked like it had a cell in it. The challenge he's going to have is, how are you going to wire it? Every one of those shingles has to be wired.

> Roofs have valleys and they have hips and they have pipes. How are you going to work around that? How are you going to cut that glass? Are you going to cut right through the cell?

The latter question is perhaps answered by the posted article: "Solar Roof uses two types of tiles: solar and non-solar." So Petersen's question is moot; the glass/solar tiles don't have to be cut to fit in a hip or around a flue, since that cutting will be done to the non-solar tiles that look the same.

The question of wiring is open: imagine the grid of wires that has to underlie that roof, and getting them all put down without a break or a short, by big guys with nail guns (if you've ever watched roofers at work -- it isn't a precision operation).

Then Petersen goes on to say,

> So I would say for the record ... it'll be cost-prohibitive. ... For $55,000 I can give you a brand-new roof that will last forever (50 years) and I can give you all the solar you can handle. ... (Musk's) product is going to be north of $100,000.

The graph in the posted article does not directly address total up-front installed cost, but rather tries to combine cost with some anticipated lifetime energy return -- a procedure with a LOT of variables and assumptions. I would like to see real numbers for a Tesla roof, $/sq.yd installed.

I think Elon got ripped off on his last shingle roof. The bar chart is nice but off by at least 150%. I've had many roofing subcontractors as clients past and present in Northern California. Based on an average of 870 roofs in 2016 for Single Family Residential homes in the bay area, Asphalt shingle roofs are $3.12 per square foot for materials and labor. The highest was $5.75 psf and the lowest $2.35 psf. Note that the SF bay area is considered one of the most expensive in roofing market. Also note that Solar City has a poor reputation in the industry for hard selling larger than needed residential solar systems.
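For scale, here's the arithmetic with the parent's Bay Area rates; the 2,000 sq ft roof size is a hypothetical example, not from the article.

```python
# The $/sq ft figures are from the parent comment (SF Bay Area, 2016);
# the roof size is an assumed, typical single-family example.
roof_sqft = 2000
avg_psf, high_psf = 3.12, 5.75   # $/sq ft, materials and labor

print(roof_sqft * avg_psf)    # ~$6,240 for an average asphalt re-roof
print(roof_sqft * high_psf)   # $11,500 even at the quoted high end
```

Either figure is a small fraction of the $80k+ Solar Roof quotes mentioned elsewhere in the thread, which is the commenter's point about the chart's baseline.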

I need to replace my roof this year / next year. Cost ~15-20k for normal roofing, up to 50k for metal. I want solar on top of that and backup power. Just put my money down for this. Cost is under the 50k I was thinking about just for the metal roof!

Time will tell when they come out and do the survey to see how correct it is but I am excited.

It seems to me that this is critical. If connections fail in a really hostile environment (high thermal range and moisture levels) then maintenance will kill any savings.

But if they've solved this problem, (and perhaps have an efficient way to replace tiles without removing the ones above), then I'd guess they will be wildly successful.

I once visited my brother who was having a new slate roof installed. While inspecting it, he saw a cracked slate on the bottom row. He insisted it be replaced, which meant removing an ever-increasing triangle of tiles above it, until you reached the ridge. The contractor did not have a good day.

I understand where these people who are saying it sucks and it's too expensive are coming from. It is more expensive than normal solar panels.

BUT! How many wealthy people have beautiful houses that don't have solar panels? Why do you think that is?

Tesla has this cool factor that didn't exist for environmentally friendly things before. How many super rich people drove electric cars or hybrids before? Now Teslas are one of the cool things to have.

They are absolutely targeting a different segment of the population, but I think overall it's a very positive thing and it'll probably work.

I'm curious about the durability - I live Colorado Springs, which is typically very sunny (good for solar), but can get pretty bad hailstorms. This means that the average roof lifetime here is much shorter than elsewhere. If the Tesla's roof tiles are actually significantly more durable than asphalt, it could be more cost-effective here than elsewhere.

At my house, it would take more than 20 years to use up $53,500 worth of electricity, assuming the panels could generate all the electricity I need (and they probably couldn't, because my roof isn't at the perfect angle). I'll probably have to stick with a conventional roof.
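Rough payback arithmetic behind a claim like that: the $53,500 is from the comment above, while the monthly bill is an assumed placeholder (the commenter's actual usage isn't given), and financing, degradation and rate changes are all ignored.

```python
# $53,500 from the parent comment; $200/month is a hypothetical bill
# that the roof is assumed to fully offset.
roof_premium = 53_500
monthly_bill = 200

years = roof_premium / (monthly_bill * 12)
print(round(years, 1))   # ~22.3 years to break even at $200/month
```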

I'm so absolutely excited for solar power. Tesla's Solar Roof, their PowerWall batteries, electric cars. It's all just painting such a bright future. Certainly Tesla has no monopoly on it, but they've made it sexy and are pushing the bleeding edge forward. Props to them.

We recently signed a contract to do an installation on our house (with a local contractor, not SolarCity). It can't happen soon enough! We'll have enough panels and batteries to be 100% off-grid throughout the entire year, plus get a good chunk of change back from the Net Metering every year. Pay off is only 8 years!

That installation is enough to cover our normal electric usage. Longer term I want to replace our gas appliances with electric and replace the car with a Tesla. Then we can double our solar installation to keep pace and BAM we will be 100% clean energy and off-grid. All while saving a bucket of money.

The thought of running off grid in the middle of a Southern California suburb? People might think me crazy, but guess what? At least we're doing our small part to save the planet, and saving money doing it. So who's the crazy one?

I'm super excited about all of the great stuff happening in solar recently, but whenever I read about the economics of home solar, I'm also always reminded of how stacked the deck is for wealthy people vs. poor folks. There's a very large federal tax credit for solar investment. That's great... but people who can't afford their own home get no such credit, and there's no way for them to get one. That's a super common trait for a lot of incentives; they go to the people who need them least. And the people who are getting these incentives are also using a lot more power (bigger houses, more power), so even with solar, their huge houses may still be contributing more to emissions than the poor folks who aren't getting any tax breaks living in apartments or rental properties.

I don't really have any answers on this, I just think it doesn't get talked about enough.

"Solar Roof uses two types of tiles: solar and non-solar. Looking at the roof from street level, the tiles look the same. Customers can select how many solar tiles they need based on their home's electricity consumption."

Game changer for suburban housing. This will accelerate the decentralisation of power generation making it less likely power failure will occur. Now for housing regulations at state and municipal level to mandate solar tiles in construction.

To all upset about pricing there are products targeting different income brackets. People in blah also can't understand how we spend half of their monthly income on some organic blah drink. Just because it does not make sense for your particular situation does not mean there is no market.

I see a lot of these personal level-vs-global level discussions here but ultimately not enough posts celebrating the fact that we're looking at both, here, and looking at a potentially much better future because of it.

Sit down and dig this: in my country the State owns sunshine. Yep. They even made sure it was included in the last Constitution. So, if this tech ever becomes cheap enough for the masses, government will be ready to tax it.

The hail ball test is deceptive. The Tesla tile is held with more support since it's horizontal: the max distance to any corner support is maybe 2-3 inches. The other, natural tiles are vertical, and therefore have 4-5 inches to the farthest supported corners. It may still work, but we can't tell from that video.

FYI, LCoE calculations on Si PV assume a reduction of at least 50% of generating capacity within 20 years. This page just claims "30 years", which is outside the expected lifetime of any cells on the market today.
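For what it's worth, the annual degradation implied by that assumption works out as follows, under a simple constant-rate model:

```python
# If capacity falls 50% over 20 years at a constant annual rate r,
# then (1 - r)**20 == 0.5.
r = 1 - 0.5 ** (1 / 20)
print(f"{r:.2%} per year")        # ~3.41%/year implied degradation

# Extrapolating the same rate out to the 30-year claim:
remaining_at_30 = (1 - r) ** 30   # == 0.5 ** 1.5, ~35% capacity left
print(f"{remaining_at_30:.0%}")
```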

People keep overlooking the objective value of not relying on "grid" power sources. Power goes off, your system keeps going. Gasoline supply stops (I've seen that a few times), you can just power your car at home. Your system fails, grid is likely still up to cover.

Supply-and-demand takes a sharp turn when supply is actually limited and can/does run out. At that point, having pre-paid for your own uninterrupted off-grid supply is worth a whole lot more.

Sounds like something you'd hear from protesters mocking the 1% but no just another day here on HN.

I've been here (in various incarnations) long enough to say this, so could we try to be just a little bit more self aware? 24k/yr is nearly double the minimum wage. A quarter million dollars is a truly immense amount of money. And buying two cars is a dream for most of America, ignoring the fact that those are two Teslas, which are roughly $70k cars (and no, you can't currently buy any $35k Teslas no matter what Musk's Twitter says.)

I'm surprised they didn't team up more with Google's Project Sunroof or Zillow or create their own version of those projects, so that you could just put in your home address and get all the relevant details. Had to check Zillow to find out my own square footage.

I am currently waiting to have my roof redone in the next couple of weeks. It's going to cost $20k. I went through the calculator and it said that my roof would be about $30k after rebates, with no battery. That, to be honest, is something I wish I had known before I signed the contract to get my roof done. I don't, however, use much electricity: about $70/month max for my entire house, so I would literally have to convert everything over to electricity for this to be worthwhile. But at this point there's no incentive for me to ever get the solar roof, unfortunately, having JUST dumped $20k into my shingle roof.

This is the dumbest thing ever. If you live in a city or a suburb, you don't need one of these things because you'll be connected to a grid that can give you electricity that is far more efficiently generated. If you live in a rural area with a lot of sun, then you can just put solar panels on the ground where they're not a bitch to clean.

I'm not against using solar electricity because it can be made affordable but this idea is equivalent to the backyard blast furnaces in Maoist China. It's a waste of time and only useful for status signaling to your eco-chic friends.

NScript is the component of mpengine that evaluates any filesystem or network activity that looks like JavaScript. To be clear, this is an unsandboxed and highly privileged JavaScript interpreter that is used to evaluate untrusted code, by default on all modern Windows systems. This is as surprising as it sounds.

Double You Tee Eff.

Why would mpengine ever want to evaluate javascript code coming over the network or file system? Even in a sandboxed environment?

What could they protect against by evaluating the code instead of just trying to lexically scan/parse it?
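One answer, as I understand it: malware routinely ships obfuscated and only decodes itself at runtime, so a purely lexical scan of the source text can't match known-bad signatures — you have to run (or emulate) the code to see what it actually does. A toy Python sketch of the difference (purely illustrative; this is not how mpengine/NScript actually works):

```python
# Hypothetical illustration: the "malicious" marker is assembled only at
# runtime, so a textual scan of the source never sees it, while evaluating
# the code reveals it immediately.

import codecs

SIGNATURE = "drop_payload"

# The attacker's script as seen on disk: the marker is ROT13-encoded,
# so no token or substring scan will match it.
source = 'marker = codecs.decode("qebc_cnlybnq", "rot13")'

def lexical_scan(src: str) -> bool:
    # Purely textual check: does the known-bad signature appear?
    return SIGNATURE in src

def evaluate(src: str) -> bool:
    # Actually run the code in a throwaway namespace and inspect the result.
    # This is the step an AV engine must sandbox very carefully.
    ns = {"codecs": codecs}
    exec(src, ns)
    return SIGNATURE in ns.get("marker", "")

print(lexical_scan(source), evaluate(source))  # False True
```

Which is presumably why the engine evaluates at all — the surprise in the report is that it does so unsandboxed.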

SourceTree is pretty much unusable on my laptop, because every time it does anything the antimalware service springs into life and uses up anything from 20%-80% of the CPU power available. I've had it take 30 seconds to revert 1 line. It's stupid.

I was very much prepared to blame Atlassian for this, but maybe I need to start thinking about blaming Microsoft instead, because it sounds like they've made a few bad decisions here.

(Still, if my options are this, or POSIX, I'll take this, thanks. Dear Antimalware Service Executable, please, take all of my CPUs; whatever SourceTree is doing, I can surely wait. Also, please feel free to continue to run fucking Javascript as administrator... I don't mind. It's a small price to pay if it means I don't have to think about EINTR or CLOEXEC.)

The attached proof of concept demonstrates this, but please be aware that downloading it will immediately crash MsMpEng in its default configuration and possibly destabilize your system. Extra care should be taken sharing this report with other Windows users via Exchange, or web services based on IIS, and so on.

And I think the intended formulation was "care should be taken sharing this report with other Windows users or via Exchange, or web services based on IIS..." (because they're afraid it could crash the servers even if sharing between non-Windows users!)

Customers should verify that the latest version of the Microsoft Malware Protection Engine and definition updates are being actively downloaded and installed for their Microsoft antimalware products.

For more information on how to verify the version number for the Microsoft Malware Protection Engine that your software is currently using, see the section, "Verifying Update Installation", in Microsoft Knowledge Base Article 2510781.

For affected software, verify that the Microsoft Malware Protection Engine version is 1.1.10701.0 or later.

Quick question on the timings of this. The report says that "This bug is subject to a 90 day disclosure deadline." - does that mean it was discovered 90 days ago and has been published now, or it was discovered on May 6 (as dates on the comments seem to suggest) and Microsoft has responded very quickly? In either case it seems strange not to have waited a couple more days because (for my system, anyway) I was still running the vulnerable version even after the report was made public.

Does MsMpEng actually do file analysis itself, unpacking, unarchiving, &c? That's the kind of stuff that should usually be sandboxed. If its zip/rar/7zip/cab/whatever support hasn't been formally verified and those components run as SYSTEM, es no bueno.

Anyone with ideas on how they came to this conclusion? Yes, I read the linked document but felt that the index assessment didn't really reflect that google (Natalie?) seems to have found this "in the wild".

If you read this, could you tell Microsoft to fix the issue with definition updates that aren't removed after updating? The definitions keep growing and waste space. (The problem resolves itself if the computer is rebooted.)

I find the naming "Visual Studio for Mac" pretty deceptive, since apparently it is not anything like the win32 VS environment, but instead based on Xamarin Studio. Even the tagline is deceptive: "The IDE you love, now on the Mac".

I would guess this won't let you build/debug win32 or winforms or wpf applications, or install any .vsix extensions from the visual studio marketplace (of which there are lots of useful ones, such as this one to manage translations - https://marketplace.visualstudio.com/items?itemName=TomEngle... ) - correct me if I'm wrong, but if I can't install my .vsix extensions, this is not "the IDE you love, now on the Mac".

Since there's a PM here from Microsoft, I've got a couple questions regarding the requirement to "sign in with your Microsoft account":

With all your branding changes over the years, what's considered a Microsoft account today? My old Hotmail account, that existed from the days before Microsoft bought Hotmail? I think it's still alive, but I haven't logged in in the better part of a decade to find out. The accounts created over the years for various Xbox machines? I think those are still around, but I doubt I could get into them at this point. The "Live" account I had to create for MSDN many years ago? Once that job and associated need for MSDN ended I've not logged in to see if it's still around.

Which one(s) should I try to find login information for to use?

Furthermore, why must I sign in in the first place for the free version? I can understand signing in to associate the install with a paid version with extra features, but I see no reason to require it for free versions without any paid features.

I really wish Microsoft had made UWP cross-platform. Would be pretty amazing if I could use UWP/C# to target Windows, Linux, macOS, iOS and Android properly. With UWP being limited to just Windows I don't see it ever being a success.

I used to be a big VB, VC++ fan boy a long time ago. 1995 :-) Have since moved on....

I've tried building a few open-source apps with VS once a year for the past few years and found that I couldn't compile a single Windows open-source package from GitHub or SourceForge, even after weeks of trying.

The code might claim to build with VS10 or VS12, but the dependency libraries will need a completely different VS version of the .xml, .proj, and .sln build files.

I challenge the PM of the VS product to try to build a few popular open-source projects such as Python, VLC, or anything on http://opensourcewindows.org/. Document the process of building the app and its dependency libraries, then compare that to building the same packages on Mac (with brew) or on Linux.

On Linux, for all the packages I like to play with, "./configure && make" handles most of the build in a few minutes. It's even easier on Ubuntu with the apt-get source/build commands, and the process is very similar on Mac.

Even the Linux kernel I can build easily, with pretty much the same one or two commands, as I have for the past 20 years.

Is this more than just Xamarin? I'm sorry -- I tried last time and that was the impression I got. I know it says it has asp.net core but can I truly build .net web services based apps now without parallels?

It would be really nice to have a microsoft rep in here to answer questions. Because what I really want is visual studio that can build C++ win32 MFC executables without having to run Windows in a virtual machine. Can it do that? I don't know.

I sincerely would LOVE to have an F# development IDE that didn't ask me to install Mono. I don't have anything against Mono, per se, I just want to see that Microsoft officially supports it across the three major platforms.

Is anybody doing professional development on .NET using VS for Mac? All this time I thought it was just Xamarin tools, but it looks like it actually has .NET Core project templates too. This has been the only thing that kept me away from Macs as a .NET dev.

Any .NET MVC developers here? I always wanted to learn ASP MVC, but never did because I was scared of the deployment situation on Linux. Has anything changed in that regard? Would you say deploying a .NET web app works almost as smoothly on Linux as, say, a node.js app?

Coincidentally I was just using this & Xamarin Studio on mac today. I didn't realize VS Mac had released, I already had the beta.

So far I don't like it as much! Not sure what features are here I actually care about as I'm just using Mono. The pads no longer make sense in VS for Mac. I just have debug pads open all the time. I can't really tell when I've stopped debugging. There's weird buttons on the pads that do nothing. Not sure why all the clutter is here, Xamarin Studio had this stuff figured out.

More good news from the MS / Xamarin camp. A few years ago I 'bet the farm' on using Xamarin for Mac to develop a Mac version of our PC application (with shared code in a PCL); since that time Xamarin (and then MS/Xamarin after the buyout) have rarely failed to impress. Kudos to the team.

I've been waiting for this for a while. Only trouble so far is that the installer comes up in the wrong locale for me (it ignores the language ordering in Preferences and displays the installer in my secondary/input language, not English, unlike fully native apps).

Does Visual Studio for Mac have the same functionality as Visual Studio for Windows? If not then they should really stop confusing customers by rebranding a product that had nothing to do with Visual Studio for Windows.

I've had to work on mission critical projects with 100% code coverage (or people striving for it). The real tragedy isn't mentioned though - even if you do all the work, and cover every line in a test, unless you cover 100% of your underlying dependencies, and cover all your inputs, you're still not covering all the cases.

Just because you ran a function or ran a line doesn't mean it will work for the range of inputs you are allowing. If your function that you are running coverage on calls into the OS or a dependency, you also have to be ready for whatever that might return.

Therefore you can't tell if your code is right just by having run it. Worse, you might be lulled into a false sense of security by saying it works because that line is "covered by testing".

The real answer is to be smart, pick the right kind of testing at the right level to get the most bang for your buck. Unit test your complex logic. Stress test your locking, threading, perf, and io. Integration test your services.

There are a few relevant facts that should be known to everyone (including managers) involved in software development, but which probably are not:

1) 100% path coverage is not even close to exhaustively checking the full set of states and state transitions of any usefully large program.

2) If, furthermore, you have concurrency, the possible interleavings of thread execution blow up the already-huge number of cases from 1) to the point where the latter look tiny in comparison.

3) From 1) and 2), it is completely infeasible to exhaustively test a system of any significant size.

The corollary of 3) is that you cannot avoid being selective about what you test for, so the question becomes, do you want that decision to be an informed one, or will you allow it to be decided by default, as a consequence of your choice to aim for a specific percentage of path coverage?
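To put rough numbers on points 1) and 2) — figures here are illustrative, not from the parent comment:

```python
# Back-of-the-envelope numbers for why exhaustive testing is infeasible.

from math import factorial

# 1) A function with k independent if-statements has up to 2**k paths.
branches = 30
paths = 2 ** branches  # already over a billion for one modest function

# 2) Interleavings of t threads, each executing n atomic steps:
#    (t*n)! / (n!)**t  -- a multinomial coefficient.
def interleavings(t: int, n: int) -> int:
    return factorial(t * n) // factorial(n) ** t

print(f"{branches} branches -> {paths:,} paths")
print(f"2 threads x 10 steps -> {interleavings(2, 10):,} interleavings")
```

Two threads of just ten steps each already admit 184,756 distinct interleavings; real programs are many orders of magnitude beyond anything a suite can enumerate.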

For example, there are likely to be many things that could be unit-tested for, but which could be ruled out as possibilities by tests at a higher level of abstraction. In that case, time spent on the unit tests could probably be better spent elsewhere, especially if (as with some examples from the article) a bug is not likely.

100% path coverage is one of those measures that are superficially attractive for their apparent objectivity and relative ease of measuring, but which don't actually tell you as much as they seem to. Additionally, in this case, the 100% part could be mistaken for a meaningful guarantee of something worthwhile.

Instead of writing clean code that makes sense and is easy to reason about, he will write long-winded, poorly abstracted, weird code that is prone to breaking without an extensive "test suite" to hold the madness together and god forbid raise an alert when some unexpected file over here breaks a function over there.

Tests will be poorly written, pointless, and give an overall false sense of security to the next sap who breaths a sigh of relief when "nothing is broken". Of course, that house of cards will come down the first time something is in fact broken.

I've worked in plenty of those environments, where there was a test suite, but it couldn't be trusted. In fact, more often than not that is the case. The developers are a constant slave to it, patching it up; keeping it all lubed up. It's like the salt and pepper on a shit cake.

Testing what you do and developing ways to ensure it's reliable, fault-tolerant and maintainable should be part of your ethos as a software developer.

But being pedantic about unit tests, chasing after pointless numbers and being obsessed with a certain kind of code is the hallmark of a fool.

The tragedy of 100% code coverage is that it's a poor ROI. One of things that stuck with me going on twenty years later is something from an IBM study that said 70% is where the biggest bang-for-the-buck is. Now maybe you might convince me that something like Ruby needs 100% coverage, and I'd agree with you since some typing errors (for example) are only going to come up at runtime. But a compiled (for some definition of "compiled") language? Meh, you don't need to check every use of a variable at runtime to make sure the data types didn't go haywire.

The real Real Tragedy of 100% coverage is the number of shops who think they're done testing when they hit 100%. I've heard words to that effect out of the mouth of a test manager at Microsoft, as one example. No, code coverage is a metric, not the metric. Code coverage doesn't catch the bugs caused by the code you didn't write but should have, for example. Merely executing code is a simplistic test at best.

Throughout my career I've found tests that test the very lowest implementation details, like private helper methods, and even though such a project can achieve 100% coverage, it's still no help in avoiding bugs or regressions.

Given a micro service architecture I now advocate treating each service as a black box and focus on writing tests for the boundaries of that box.

That way tests actually assist with refactoring rather than be something that just exactly follows the code and breaks whenever a minor internal detail changes.

However, occasionally I do find it helpful to map out all input/output for an internal function to cover all edge cases. But that's an exception.

I agree (mostly) with the author's standpoints, but his arguments to get there are not convincing:

> You don't need to test that. [...] The code is obvious. There are no conditionals, no loops, no transformations, nothing. The code is just a little bit of plain old glue code.

The code invokes a user-passed callback to register another callback and specifies some internal logic if that callback is invoked. I personally don't find that obvious at all.

Others may find it obvious. That's why I think, if you start with the notion "this is necessary to test, that isn't", you need to define some objective criteria when things should be tested. Relying on your own gut feeling (or expecting that everyone else magically has the same gut feeling) is not a good strategy.

If I rewrite some java code from vanilla loops-with-conditionals into a stream/filter/map/collect chain, that might make it more obvious, but it wouldn't suddenly remove the need to test it, would it?
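A toy Python analogue of that point — the same contract written both ways, and one test that covers either:

```python
# Same logic as a plain loop and as a comprehension ("stream" style).
# The second form may read as more obvious, but both still warrant a test.

def positives_squared_loop(xs):
    out = []
    for x in xs:
        if x > 0:
            out.append(x * x)
    return out

def positives_squared_stream(xs):
    return [x * x for x in xs if x > 0]

# One test exercises the contract regardless of style:
for f in (positives_squared_loop, positives_squared_stream):
    assert f([-2, -1, 0, 1, 2, 3]) == [1, 4, 9]
```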

>"But without a test, anybody can come, make a change and break the code!"

>"Look, if that imaginary evil/clueless developer comes and breaks that simple code, what do you think he will do if a related unit test breaks? He will just delete it."

You could make that argument against any kind of automated test. So should we get rid of all kinds of testing?

Besides, the argument doesn't even make sense. No one is using tests as a security feature against "evil" developers (I hope). (One of) the points of tests is to be a safeguard for anyone (including yourself) who might change the code in the future and might not be aware of all the implications of that change. In that scenario, it's very likely you change the code but will have a good look at the failed test before deciding what to do.

The article illustrates what happens when you have inexperienced or poor developers following a management guideline.

To see how 100% coverage testing can lead to great results, have a look at the SQLite project [1].

In my experience, getting to 100% takes a bit of effort. But once you get there it has the advantage that you have a big incentive to keep it there. There is no way to rationalise that a new function doesn't need testing, because that would mess up the coverage. Going from 85% to 84% coverage is much easier to rationalise.

And of course 100% coverage doesn't mean that there are no bugs, but x% coverage means that 100-x% of the code is not even run by the tests. Do you really want your users to be the first ones to execute the code?

As an anecdote, in one project where I set the goal of 100% coverage, there was a bug in literally the last uncovered statement before getting to 100%.

We've almost stopped unit testing. We still test functionality automatically before releasing anything into production, but we're not writing unit tests in most cases.

Our productivity is way up and our failure rates haven't changed. It's increased our time spent debugging, but not by as much as we had estimated that it would.

I won't pretend that's a good decision for everyone. But I do think people take test-driven-development a little too religiously and often forget to ask themselves why they are writing a certain unit test.

I mean, before I was a manager I was a developer and I also went to a university where a professor once told me I had to unit test everything. But then, another professor told me to always use the singleton pattern. These days I view both statements as equally false.

I think a bigger epidemic is we're putting too much emphasis on "do this" and "do that" and "if you don't do this then you're a terrible programmer". While that sometimes may be true, much more important is to have competent, properly trained professionals, who can reason and think critically about what they're doing, and who have a few years of experience doing this under their belt. Just like other skilled trades, there's a certain kind of knowledge that you can't just explain or distill into a set of rules; you have to just know it. And I see that in the first example in this article, where the junior programmer is writing terrible tests because he just doesn't know why they're bad tests (yet).

I might be completely wrong on this one, but it seems to me that a lot of the precepts of TDD and full code coverage have a lot to do with the tools that were used by some of the people that popularized this.

Some of my day involves writing Ruby. I find using Ruby without 100% code coverage to be like handling a loaded gun: I can track many outages to things as silly as a typo in an error handling branch that went untested. A single execution isn't even enough for me: I need a whole lot of testing on most of the code to be comfortable.
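A sketch of that failure mode (hypothetical function names): a typo in an error-handling branch that nothing checks until the branch finally runs. A compiler would refuse to build this; a dynamic language happily ships it.

```python
# The happy path is fully exercised; the error branch hides a typo
# that only surfaces as a NameError when the branch actually runs.

def failing_primary():
    raise ValueError("primary unavailable")

def fetch_with_fallback(primary, fallback):
    try:
        return primary()
    except ValueError:
        return fallbak()  # typo: should be fallback() -- a NameError in waiting

# Every happy-path test passes, so the function looks fine...
assert fetch_with_fallback(lambda: 42, lambda: 0) == 42

# ...and the typo only shows up when the error branch is finally exercised:
try:
    fetch_with_fallback(failing_primary, lambda: 0)
except NameError:
    print("error branch crashed with NameError instead of falling back")
```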

When I write Scala at work instead, I test algorithms, but a big percentage of my code is untested, and it all feels fine, because while not every piece of code that compiles works, the kind of bugs that I worry about are far smaller, especially if my code is type heavy, instead of building Map[String,Map[String,Int]] or anything like that. 100% code coverage in Scala rarely feels as valuable as in Ruby.

Also, the value of tests as a way to force good factoring changes by language and paradigm. Most functional Scala doesn't really need redesigning to make it easy to test: functions without side effects are easy to test, and easier to refactor. A deep Ruby inheritance tree with some unnecessary monkey patching, by comparison, just demands testing, and writing the tests themselves forces better design.

The author's code is Java, and there 95% of the reason for testing that isn't purely based on business requirements comes from runtime dependency injection systems that want you to put mutability everywhere. Those are reasons why 100% code coverage can still sell in a Java shop (I sure worked in some that used too many of the frameworks popular in the 00s), but in practice, there's many cases where the cost of the test is higher than the possible reward.

So if you ask me, whether 100% code coverage is a good idea or not depends a whole lot on your other tooling, and I think we should be moving towards situations where we want to write fewer tests.

But remember nothing is free, nothing is a silver bullet. Stop and think.

I'm going to be the one to point at the elephant in the room and say: Java. More precisely, Java's culture. If you ask developers who have been assimilated into a culture of slavish bureaucratic-red-tape adherence to "best practices" and extreme problem-decomposition to step back and ask themselves whether what they're doing makes sense, what else would you expect? These people have been taught --- or perhaps indoctrinated --- that such mindless rule-following is the norm, and to think only about the immediate tiny piece of the whole problem. To ask any more of them is like asking an ostrich to fly.

The big error being made in this article (and most of the comments here) is the assumption that the purpose of unit tests is to "catch bugs." It isn't.

The purpose of unit tests is to document the intended behaviour of a unit/component (which is not necessarily a single function/method in isolation) in such a way that if someone comes along and makes a change that alters specified behaviour, they are aware that they have done so and prevented from shipping that change unless they consciously alter that specification.

And, if you are doing TDD, as a code structure/design aid. But that is tangential to the article.

Unit tests are a poor substitute for correctness. Many unit tests does not a strong argument make.

Unit tests are typically inductive. Developer shows case A, B and C give the expected results for function f. God help us if our expectations are wrong. So, you're saying since A, B and C are correct therefore function f is correct. Well that may be, or maybe A, B and C are trivial cases, in other words, you've made a weak argument.
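A toy example of that weak induction (deliberately broken, of course): three green unit tests, one wrong function.

```python
# 'add' is accidentally implemented with multiplication. Cases A, B, C
# all happen to land where a * b == a + b, so every test is green.

def add(a, b):
    return a * b  # bug: should be a + b

assert add(2, 2) == 4     # case A
assert add(0, 0) == 0     # case B
assert add(3, 1.5) == 4.5 # case C

# Yet the function is wrong for almost every other input:
print(add(2, 3))  # 6, not the expected 5
```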

100% test coverage sounds like lazy management. Alas, the manager may have worked their way via social programming rather than computer programming. In such cases, better to say you have 110% test coverage.

I've been programming for a living since 1996, and only recently started to do TDD in the normal sense of writing unit tests before writing code. I've found it to be an enormous help with keeping my code simple - the tests or the mocking getting difficult is a great indicator that my code can be simplified or generalised somehow.

I argued for functional instead of unit testing for years, but it was only when a team-mate convinced me to try unit testing (and writing the tests FIRST) that the scales fell from my eyes. Unit testing isn't really testing, it's a tool for writing better code.

BTW from an operational perspective I've found it's most effective to insist on 100% coverage, but to use annotations to tell the code coverage tool to ignore stuff the team has actively decided not to test - much easier to pick up the uncovered stuff in code review and come to an agreement on whether it's ok to ignore

A lot of people here seem to have strong opinions against 100% coverage, so I'll risk their ire with my strong opinion in favor.

If you have, say, 95% coverage -- and most corporate dev orgs would be thrilled with that number -- and then you commit some new code (with tests) and are still at 95%, you don't know anything about your new code's coverage until you dig into the coverage report. Because your changes could have had 100% coverage of your new thing but masked a path that was previously tested; or had 10% but exercised some of the previously missing 5%.

If you have 100% coverage and you stay at 100% then you know the coverage of your new code: it's 100%. Among other things this lets you use a fall in coverage as a trigger: to block a merge, to go read a coverage report, whatever you think it warrants.
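Putting concrete numbers on the first scenario (the line counts here are hypothetical): two very different commits that both leave the headline figure at exactly 95%.

```python
# Two commits against a 2000-line codebase at 95% coverage.

old_total, old_covered = 2000, 1900   # 95% before the commit
new_lines = 100

# Commit A: the 100 new lines are fully covered, but the change masks a
# previously tested path, so 5 old lines are no longer exercised.
cov_a = (old_covered - 5 + 100) / (old_total + new_lines)

# Commit B: only 10 of the new lines are covered, but the new tests happen
# to exercise 85 previously uncovered old lines.
cov_b = (old_covered + 85 + 10) / (old_total + new_lines)

print(f"commit A: {cov_a:.1%}, commit B: {cov_b:.1%}")  # both read 95.0%
```

The aggregate number is identical in both cases; only a 100% baseline (or digging into the per-file report) distinguishes them.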

Also, as has been noted elsewhere, anything other than a 100% goal means somebody decides what's "worth" testing... and then you have either unpredictable behavior (what's obvious to whom?) or a set of policies about it, which can quickly become more onerous than a goal of 100%.

It's important to remember that the 100% goal isn't going to save you from bad tests or bad code. It's possible to cheat on the testing as well, and tests need code review too. There's no magic bullet, you still need people who care about their work.

I realize this might not work everywhere, but what I shoot for is 100% coverage using only the public API, with heavy use of mock classes and objects for anything not directly under test and/or not stable in real life. If we can't exercise the code through the public API's then it usually turns out we either didn't rig up the tests right, or the code itself is poorly designed. Fixing either or both is always a good thing.

I don't always hit the 100% goal, especially with legacy code. But it remains the goal, and I haven't seen any convincing arguments against it yet.

Maybe I'm dense, but this code raises at least one question that I would prefer to see answered by tests.

The parameter watchlists appears to be defined in a scope above the one under test. What happens if watchlists is null for some reason? What should be the behavior?

Then there's the tricky question of what to do as this method evolves. Next month, a watchListRow might need to be updated with a value before being added to watchlists. Later, a check might be added to ensure some property exists on watchListRow. At what point will a test be written for this method?

I wish people cared more about the craft of an amazing plugin architecture or an advanced integration between a machine learning system and a UI, but no, more and more of our collective development departments care more about TDD and making sure things look perfect. Don't worry about the fact that there are no integration tests and we keep breaking larger systems, and while there might be 100% code coverage, no developer actually understands the overall system.

I've seen projects where management had rules like "you must have 70% code coverage before you check in". Which is crazy, for a lot of reasons.

But the developer response in a couple cases was to puff the code up with layers of fluff that just added levels of abstraction that just passed stuff down to the next layer, unchanged, with a bunch of parameter checking at each new level. This had the effect of adding a bunch of code with no chance of failure, artificially increasing the amount of code covered by the tests (which, by the way, were bullshit).

I got to rip all that junk out. It ran faster, was easier to understand and maintain, and I made sure I never, ever worked with the people who wrote that stuff.

If you can prove that your testing process is perfect, then your entire development process can then be reduced to the following, after the test suite is written:

cat /dev/random | ./build-inline.sh | ./test-inline.sh | tee ./src/blob.c && git commit -Am "I have no idea how this works, but I am certain that it works perfectly, see you all on Monday!" && git push production master --force

When presented like this, relying on human intelligence and experience doesn't seem like such a bad thing after all.

- Don't overspecify your tests. Test only publicly specified parts of the contract. Things that you need to be true and that the callers of the module expect to be true. And yes, you will change the test when the contract changes.

I once joined a company that had 90% code coverage. After a while it became clear that they were all vanity tests: I could delete huge swathes of code with zero test failures. We let the contractors that wrote it move on, and we formed a solid team in house. We don't run code coverage any more because it makes the build run four times slower. Instead, I trust our teams to write the good tests. Sometimes that means <100% coverage, and the teams are able to justify it.

Some feedback on the article:

>Test-driven development, or as it used to be called: test-first approach

Test-first is not the same as Test-Driven. The test-first approach includes situations where a QA dev writes 20 tests, and then hands them to an engineer who implements them. That's not TDD.

>"But my boss expects me to write test for all classes," he replied.

That's very unlikely to be TDD. "Writing tests because I've been told to" is never likely to be "I'm writing the tests that I know to be necessary", and that's all TDD is: writing necessary tests. If the test isn't necessary, then neither is the code.

>Look, if that imaginary evil/clueless developer comes and breaks that simple code, what do you think he will do if a related unit test breaks? He will just delete it.

Sure. But then their name is on that act in the commit log. The test is a warning. I've been lucky not to have worked with evil developers, but I have worked with some clueless ones, and indeed some have just deleted tests. That's an opportunity for education, and quality has steadily improved.

>The tragedy is that once a "good practice" becomes mainstream we seem to forget how it came to be, what its benefits are, and most importantly, what the cost of using it is.

Totally agree. So many programmers and teams practice cargo cult behaviors. Unfortunately, this article is one of them: making claims about TDD, and unit tests in general, without understanding "why" TDD is effective.

I think 100% test coverage is not a fixed state to reach; pursuing it is a process you must go through to learn how to write tests.

Think about one question first: why did the manager force developers to achieve 100% coverage? There must be some benefit, or the manager might as well work for the competitor. Viewed from a higher position, considering time and organizational factors, it might be a good choice. If every engineer in the company had as deep an understanding of test coverage as the author, they really would not need to pursue 100% coverage. But in reality we see many companies that do not pursue test coverage at all; their coverage tends to be 0. That's why forcing 100% test coverage for a short time can help: engineers need time to form the habit of testing their code, and then to experience the pain of bad tests. Only then do they start to think about which kinds of tests are valuable.

Naming the test just "initialise" is not very useful, as it doesn't assert what you expect the method under test to do. Given that the purpose of the initialise function is to populate a watchlists collection variable from the parameter, I'd name the test something like "initialise_daoRecordCountIs9_watchlistCountIs9". The pattern I generally use is <method_name>_<assertion_under_test>_<expected_result>.

Then, my test would be the following:

* Set up / mock the dao parameter to have 9 rows

* Create an instance of the class under test and push in the dao parameter

* Verify / Assert that the class under test now has 9 items in the watchlists variable - I'm assuming there is a public method to access that.
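The three steps above can be sketched in Python, with unittest.mock standing in for a Java mocking framework. `Watchlist`, `initialise`, and `get_rows` here are hypothetical stand-ins for the article's class, not its actual API:

```python
from unittest.mock import MagicMock

class Watchlist:
    """Hypothetical class under test."""
    def __init__(self):
        self.watchlists = []

    def initialise(self, dao):
        # Populate the internal collection from the dao's rows.
        for row in dao.get_rows():
            self.watchlists.append(row)

def test_initialise_daoRecordCountIs9_watchlistCountIs9():
    # Set up / mock the dao parameter to have 9 rows.
    dao = MagicMock()
    dao.get_rows.return_value = [object() for _ in range(9)]

    # Create an instance of the class under test and push in the dao.
    subject = Watchlist()
    subject.initialise(dao)

    # Verify the class under test now holds 9 items.
    assert len(subject.watchlists) == 9

test_initialise_daoRecordCountIs9_watchlistCountIs9()
```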

I feel like this high test coverage thing can only work if you have tight modules, tight interfaces, and you only bother testing at module boundaries. So the test cases almost function as a bit of executable API documentation - here's the method name, here's what it does, here's the contracts and/or static types, and.... given this input, you should get this output.

Do it for the high level bits you actually expose. If you're exposing everything, tests won't really save you - architecture and modularity are more fundamental and should be tackled first. If you're writing a big ball of mud, what benefit do you get testing a mudball?

100% code coverage, and even TDD, doesn't and shouldn't mean 100% unit tested. Glue code and declarations don't need unit tests. Some functional tests should provide all the coverage needed to give you confidence to refactor that code in the future.

Edit: while I'm a huge TDD advocate, I'm not a big advocate of measuring code coverage. That should only be necessary if you are trying to get a code base under coverage that wasn't TDD'd. Even then I'd rather add the coverage as I'm touching uncovered code. If it works and I'm not touching it, it doesn't need tests.

There's a human tendency to overemphasize things you can quantify. So we try to figure out how to test every code path rather than what we should do: try to figure out which inputs we should test against.

Agreed. I would rather have 5% test coverage that checks against all risky edge cases/inputs than 100% test coverage that checks against arbitrary, low-risk inputs.

Writing tests to confirm the simplest, most predictable use cases is a waste of time - Those cases can be figured out very quickly without automated testing because they are trivial to reproduce manually.

Having 100% code coverage is like having 0 warnings (although it certainly is a lot harder). In this situation, your tools are not telling you "all's good", but rather "I can't detect anything suspect here".

There's a good chance that the dev time needed to go from 90% coverage to 100% coverage might be better spent somewhere else.

One point already made by several people on this thread is that code coverage, while helpful, is not enough (and perhaps is not even the best bang for the buck).

In hardware verification (where I come from, and where the cost of bugs is usually higher), "functional coverage" is considered more important. This is usually achieved via constraint-based randomization (somewhat similar in spirit to QuickCheck, already mentioned in this thread).

Back when I learnt Haskell we had a lecturer named John Hughes, who had co-authored a tool named QuickCheck[1]. We used this tool extensively throughout the course; writing tests with it was quite simple, and writing elegant generators was a breeze. In my experience, those tests did a much better job of finding edge cases than many unit tests I've seen in larger, close-to-full-coverage TDD projects.

As with much else, TDD should be a tool with the ultimate goal of aiding us in writing correct, less bug-riddled code; once the tool adds more work than it saves, it's no longer offering much aid.

This reminds me of recent projects where developers started to mock every piece of code. The result was that all tests passed while the codebase exploded in real environments.

In my opinion the best advice is to force developers to use their brains. I know, there are a lot of sh*tty CTO/CEO/HoIT/SomeOther"Important"Position people out there who see developers as code monkeys and say that developers are not paid to think, but in that case the best thing developers can do is learn to say "NO"... My experience with that kind of people is that they need to learn the meaning of "NO", instead of wasting time and money at the end of the day.

But I know a lot of people in the early days of XP went to extremes, 100% code coverage, mutation tools for every condition to ensure unit tests broke in expected ways, etc. But they were more experiments in pushing the limits rather than things that gave productivity gains.

IMO, if I ever have 100% code coverage, I did something wrong. The best I can usually achieve is 95-98%, because of my defensive coding to warn about the "impossible" use cases.

Escape a `while True` loop? Log it, along with the current state of the program, and blow up (so we can be restarted). Memory allocation error? Log it. The big "unexpected exception" clause around my main function? Log it.
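A minimal sketch of that defensive pattern; `serve_forever` and `handle` are hypothetical names. The lines after the loop are exactly the "impossible" 2-5% that coverage tools will flag as untested:

```python
import logging
import queue

log = logging.getLogger("worker")

def handle(msg):
    """Stand-in for real work; in production this never returns False."""
    return msg is not None

def serve_forever(inbox):
    while True:
        msg = inbox.get()
        if not handle(msg):
            break
    # "Impossible" exit path: log the state, then blow up so a
    # supervisor can restart the process.
    log.critical("serve_forever exited unexpectedly; last msg=%r", msg)
    raise SystemExit("unexpected loop exit")
```

In normal operation those last two lines never run, and that's the point: they exist for the day the impossible happens, not for the coverage report.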

I don't think I know anyone that does TDD. Uncle Bob has indoctrinated a few zealots into that mindset, but it all comes off as crazy to me. A germ of a good idea taken way too far.

People of that school tend to write tests that test implementation rather than functionality. As a result you get fragile tests that break not telling you what went wrong but how the implementation has changed.

Good tests should test behavior. A change in implementation shouldn't break the test.

I have 100% code coverage on a couple of projects. It has two benefits:

Behaviour is completely covered by tests, so changes in APIs which might break consumers of the library will at least be detected.

New work on the library tends to follow the 100% coverage by convention, so it's somewhat easier to maintain. Apps that have 90% coverage, for example, tend to slip and slide around. Having 100% coverage projects the standard "If your contribution doesn't have 100% coverage it won't be accepted". I don't think this is a bad default position.

I noticed the author was speechless in two situations, both of which involved "but we write all our tests in <test-framework>." This is legitimate and should be taken more seriously by the author.

Codebases serve businesses and businesses value legibility over efficacy. It's more important to them to have control over their assets than to have better assets. Using one test framework is in perfect service of that goal.

It's inefficient in that it will take future developers more time to understand that code. But fewer architectural elements means that you can get by with less senior programmers.

Imagine if you went onto a software project and they were using 6 different databases because every time they had a new kind of data that they wanted to access differently, they reached for another database rather than use the one they had.

Of course nobody would ever do that (well, I hope anyway), but I do see a lot of unnecessary architectural complication in projects in service of "using the right tool for the job." And it can balloon. A new test framework has to work in your CI framework. You need to decide how to handle data. It's not a huge decision, but it's more complicated than most devs would think, and it'll take up more of your time than you'll expect.

You can generalize this to the main thrust of the article. 100% code coverage is not a bad goal to want to hit. Sure, you're going to get a lot of waste. But you're not paying for it; your employer is. And your employer might have a different idea of which side of the tradeoff they want to be on and where to draw the line. You know the code way better than they will, but they know the economics far better than you ever could.

Write a test if you don't feel confident that a piece of code does what you think it does. If you're not sure what it does now, there's little chance that you or anyone else will in the future, so write a test to understand it and to make that understanding explicit.

Automated tests are code, and come with all the engineering and maintenance concerns of 'real' code. They don't do anything for your customers, though, so are only appropriate when they actually make your work faster or safer.

Automated tests are a spec, and are exactly as hard to write completely and correctly, and as easy to get wrong in ignorance, as a 'real' spec. If you find them easy to write, odds are good you would find the code easy to visually verify as well - which is to say, you're working on a trivial problem.

They have their place, but that place is not everywhere. It is where they are efficient and valuable. I particularly look for places where they are like the P half of an NP problem, an independent estimate of the answer to a math problem. If you ever find yourself writing the same code twice, unless it's a safety-critical system or something, that's a moment to stop and reflect on the value of what you are doing.

The title does not mean anything and is basically clickbait in my opinion, as it sounds cool to trash some ideal. That said, the magic 100% number is far removed from reality and does not represent anything by itself.

100% coverage on a project of which size? Imagine you have a single-script project that does exactly one thing, and two tests are enough to verify that it works without doing so manually. That is not the same as writing tests for a file system, or tests which consist mostly of mocks upon mocks upon mocks.

I think the real problem is that someone comes up with an idea, like TDD, tells people about it, some people hear about it and start preaching it, some people start believing it, and nobody actually thinks things through, usually because they don't have the experience (it's not a fetish, as someone said). Like everything in life, you have to think things through before doing them: ask yourself whether this is worth doing, and when it is worth doing. You can't just say: "Oh, we are doing TDD, thus everything must be done the TDD way".

For people that say tests are useless, or good code does not need tests, I ask, when you make a change do you still make sure your code works by hand? And if you do make sure, why don't you automate that? You are a programmer after all.

And for those that say you need to test everything: well, you don't, especially if you'd need to mock most of it, or if it's really not that important a piece of code, like a dev tool or something. What you want to make sure works is the customer/user-facing stuff that must work for you to get paid, and you want to be able to verify this at any time of day without losing hours clicking around checking for stuff.

So this is not straightforward: 100% means nothing without context, and doing anything in excess and without valid reasons is pointless or even harmful. And this has nothing to do with programming but with life in general.

>Testing is usually regarded as an important stage of the software development cycle. Testing will never be a substitute for reasoning. Testing may not be used as evidence of correctness for any but the most trivial of programs. Software engineers some times refer to "exhaustive" testing when in fact they mean "exhausting" testing. Tests are almost never exhaustive. Having lots of tests which give the right results may be reassuring but it can never be convincing. Rather than relying on testing we should be relying in reasoning. We should be relying on arguments which can convince the reader using logic. http://www.soc.napier.ac.uk/course-notes/sml/introfp.htm

Code coverage is an illusion, since what you want is actually "possible states coverage". You can cover all the lines of your code and still cover a minority of the possible states of the program, and especially a minority of the most probable states when actual users run it; or you can cover 50% of your lines of code and yet cover many more real-world states, including states in which it is more likely to find a bug. I think that more than stressing single features with unit tests, it is more useful to write higher-level stress tests (fuzz tests, basically) that exercise lines of code as a side effect of exploring many states of the program. Specific unit tests are still useful, but mostly to ensure that edge cases and the main normal behavior correspond to the specification. As with everything, it is the developer's sensibility that should drive which tests to write.

I recently broke a unit test by adding one entry to a hash constant (a list of acceptable mime types and their corresponding file extensions). I looked at the test, and it was just comparing the defined constant, to a hardcoded version of itself.

I rewrote the test by converting the constant to a string, taking a checksum of it, and comparing _that_ to a short hardcoded value. Now the test is just 1 line of code, instead of 41! Then I put it through code review, and my team said "What a ridiculous test." But they didn't see any problem in the previous version that compared it to a 40-line hardcoded hash.

I think I'd rather focus on documenting the information flow - on having the tools to track down where things start to go wrong when there's a problem and I ask things to run with more verbosity.

Initial "complete coverage" should probably start from mockups that test an entire API. The complete part should be that, in some way, the tests cover expected successes AND failures (successfully return failure) of every part of the API, but there's no need to test things individually if they've already been tested by other test cases.

Invariably reality will come up with more cases and someone will notice an area that wasn't quite fully tested. That's where a bug exists, but the golden test cases probably wouldn't have located it anyway. It'll take thousands or millions of users to hit that combination and notice it. Then you get to add another test case while you're fixing the problem.

Property-based testing has made testing more productive and fun for me. You write a few lines of code that produce a large amount of tests. The idea is obviously so useful, I'm surprised it's uncommon in practice. When you think about coverage in terms of inputs applied instead of statements executed, property-based testing is far more productive than writing tests by hand.

It's not a silver bullet though. Some property-based tests are easy to write but offer little value. Sometimes you spend more time writing code to generate the correct inputs than the value of the test warrants. It has a learning curve. Still, I think it is the most powerful tool you can master for testing.
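The idea can be sketched in a few lines with no framework at all; real tools like QuickCheck or Hypothesis add input shrinking and smarter generators on top. Here, a hand-rolled round-trip property over random strings (`run_length_encode` is just a stand-in function under test):

```python
import random

def run_length_encode(s):
    """Function under test: 'aaab' -> [('a', 3), ('b', 1)]."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip(trials=200, seed=0):
    """Property: decode(encode(s)) == s for many random inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(20)))
        assert run_length_decode(run_length_encode(s)) == s, repr(s)

check_roundtrip()
```

Two hundred inputs from one property, including the empty string and long runs - cases a hand-written example-based test suite often forgets.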

How I do it is to go from rough testing of pages and components to granular testing of those parts which had some error.

For pages, I just run them to see if they display without producing errors; same goes for critical components. This gets me the feeling of a roughly tested system that works from the user's perspective, with little time investment.

Then I test critical business logic, but usually only after some error was reported.

Mind, though, that I am a freelance developer unconstrained by organizational rules.

Many of us have made similar mistakes (especially early in our careers) when taking on new techniques with which we became particularly enthralled. That's why it's a good idea to have a couple of 'elders' on staff, so as to not let youthful passion wreak havoc. They tend to keep teams pragmatic and lazy (a good thing, in programming).

For instance, I remember all the bad code that I wrote and read circa 1997-1999, after design patterns became the rage.

While 100% code coverage doesn't guarantee zero bugs, it's useful for easily detecting new untested code and possible new bugs. Another point is that even when the code looks obviously right by visual inspection, we still want to automate the check. Relaxing the 100% coverage requirement is a lazy slippery slope I don't take with my code.

The danger of 100% percent coverage is that the goal of tests becomes the 100% code coverage and not bug detection anymore.

One of the pressures for 100% coverage is working in a non-typesafe language. The gospel of coverage largely evolved in the Ruby community, where I often see test suites that look like a handrolled typechecker.

I find that, as I'm building something from scratch, the vast majority of the errors I make are just things I didn't think of. Tests don't help there because I can't test on input that I don't even imagine happening. So I generally write few tests, because, to be honest, most code is trivial and algorithm-light. Sure, if I have to write a parser or something a bit more fiddly, I'll write a unit test to be sure that it's doing what I expect, but that tends to be the exception, not the rule. I do write my code with an eye toward later testability if it turns out to be necessary, but I find that to be fairly easy, and also a good measure of whether I'm doing the right thing: most code that isn't testable is probably code that's difficult to read and maintain, anyway, so if I look at something and think "oof, how would I ever write a test for that?" I'll usually delete it and start over.

When I have something that should be working, I test it in a more functional/integrative manner, and move on.

Later, I'll write unit tests when I need to. If I want to refactor something, or drastically change the implementation of something, I'll write out some tests beforehand to be sure that the pre and post behaviors match.

I've always thought that TDD is just premature optimization. You're optimizing for the idea that you -- or someone -- will later need to make large enough changes to your code that you'd worry about breaking it. In my experience that's fairly rare, and you spend less time overall if you just write the tests as you need them, not up-front. Yes, writing a test when the code is fresh in your mind will be faster than writing it much later, but then you're writing a ton of test code that likely won't be necessary.

An objection I hear to this is that you're not just writing tests for yourself, you're writing tests for the others who will need to help maintain your code, perhaps after you're gone. I'm somewhat sympathetic to this, but I would also say that if someone else needs to modify my code, they damn well better first understand it well enough such that they could write tests before changing it (if they deem it necessary). Anything else is just irresponsible.

(Note that I primarily work in strongly statically typed languages. If I were writing anything of complexity in ruby/python/JS/etc., I don't think I'd feel comfortable without testing a lot of things I'd consider trivial in other languages.)

(Also note that some things are just different: if you're writing a crypto library, then you absolutely need to write tests to verify behaviors, in part because you're building something that must conform to a formal spec, or else it's less than worthless.)

Striving for 100% coverage is an expensive mistake because as a testing indicator it gives you a false sense of security. But someone has to pay for the time spent writing and maintaining those tests, and fixing the bugs that are still there.

I much prefer to use code coverage as a weak indicator for finding dead code.

I'd suggest the tragedy here is an absence of kaizen and team processes that foster continuous improvement. If folks are doing inefficient things, that should be caught by the team in a retro or similar.

The FBI director is supposed to have a 10 year term. That went in after J. Edgar Hoover died. Nobody wanted another J. Edgar Hoover FBI Director for Life situation, but having the FBI director be a "pleasure of the President" appointment made it too political.

This makes Andrew G. McCabe acting FBI director. He's in the civil service, not a Presidential appointment. He was an FBI agent and worked his way up. From what little is available about him, he seems to be good at the job.[1] As civil service, he can only be fired for cause.

Appointing a new FBI director requires Congressional approval, and will be controversial.

Dear Director Comey: I have received the attached letters from the Attorney General and Deputy Attorney General of the United States recommending your dismissal as the Director of the Federal Bureau of Investigation. I have accepted their recommendation and you are hereby terminated and removed from office, effective immediately. While I greatly appreciate you informing me, on three separate occasions, that I am not under investigation, I nevertheless concur with the judgment of the Department of Justice that you are not able to effectively lead the Bureau. It is essential that we find new leadership for the FBI that restores public trust and confidence in its vital law enforcement mission. I wish you the best of luck in your future endeavors.

"Trump just fired the man leading a counterintelligence investigation into his campaign, on the same day that the Senate Intelligence commitee requested financial documents relating to Trump's business dealings from the treasury department that handles money laundering." -Comment from reddit that sums up how strange this is.

Say what you want about the politics, but it's inarguable that Comey, whether he wanted to or not, had become a partisan lightning rod for both sides. The unbiased credibility of the FBI was at stake with Comey at the helm, and this is probably a good move for the country.

Mods, please let this one live. This is big news and we can't ignore it. I don't care what the policies are about political stories. I also don't care if I can go somewhere else to read about it. I want to know what _this_ community's opinions are on the matter.

Well, I would certainly be interested in the circumstances, especially considering that I always believed he was pro-Trump. Some even said he played an important role in Trump winning the election, because he opened an investigation into Clinton's emails right before the election.

Comey's book deal is going to be enormous. His great-great-great-grandchildren will be buying Maseratis with the proceeds. He just needs to withstand another six months of testifying on the Hill in front of at least two standing committees and probably a special committee.

President Trump cares little about protecting the Office of the President... his administration has a well-documented history of putting a thumb on the scale regarding the investigation of collusion between his campaign and Russian agents/agencies... this is damaging the credibility of the office... this firing was also clearly decided on first, with the rationale secured afterward... it boggles the mind that Trump rationalizes this executive action by claiming that Comey was "mean to Clinton" when only a few days ago Comey had his trust... the reasoning cited, and the involvement of Sessions in interfering with an investigation he had recused himself from, is bogus... It is not unreasonable to claim that a cover-up is in full swing!

This and Comey's recent misstatements to Congress about Huma Abedin forwarding sensitive emails to Anthony Weiner are alone grounds for Trump to fire Comey. Whether Trump had other motives... I mean, who knows? It's all speculation.

The thought of Trump nominating an FBI director is bone-chilling. Summed up with what's known about Flynn and every other suspicious data point we have, I am increasingly sure that this is a modern-day coup of the USA.

Time to pause tech and effect change, this is leading to a future darker than I can possibly contemplate.

Hopefully there's a silver lining and that this means the encryption backdoor push (led by Comey) will slow to a crawl or be forgotten. He was already preparing a push for FISA Amendments renewal together with Dianne Feinstein (who is apparently having a change of heart about her own retirement).

Trump had to dismiss Comey. Comey damaged the FBI in his recent sessions with Congress, to the point that the FBI was on the defensive trying to set the record straight. Considering the erratic behavior with both the Clinton and Russia issues, it is doubtful that Comey was capable of continuing in such an office.

Like or dislike Trump, there have been many on the Democratic Party side calling for Comey to be gone and the odd part is many are now rushing to the guy's defense. That and he was fired over incorrect testimony about a Clinton aide, testimony that painted her in a worse position than deserved.

Irrational is the best way to describe the reaction of many. I was really shocked by some in the press; it is near impossible to separate journalists from opinion editors when they cannot separate the roles themselves.

Hello everyone! Author here. I didn't expect anyone to find this repo, much less post it on Hacker News!

This project is inactive for two main reasons:

- SQLite is not a great general-purpose SQL engine. Poor performance of joins is a serious problem that I couldn't solve. The virtual table support is good but not quite good enough; not enough parts of the query are pushed down into the virtual table interface to permit efficient querying of remote tables. Many "ALTER" features are not implemented in SQLite which is a tough sell for experimental data manipulation.

- T-SQL, the procedural language I chose to implement atop SQLite, is not a great general-purpose programming language. Using C# in LINQpad is a more pleasant experience for experimentally messing around with data. R Studio is a good option if you need statistical functions.

I think several good solutions in this problem space exist. A local install of SQL Server Express can be linked to remote servers, allowing you to join local tables to remote ones. That setup serves nearly all of SQL Notebook's use cases better than SQL Notebook does. LINQpad is also very convenient for a lot of use cases.

I appreciate the interest! I may spin off the import/export functionality into its own app someday, as I had a lot of plans in that area, but I think SQL Notebook as it stands is a bit too flawed to develop fully.

I recently had to teach a series of workshops on SQL and I was trying to figure out the best system to allow students to independently work with small datasets without having to install any software. I found Alon Zakai's absolutely fantastic version of SQLite in JavaScript here:

Ouch, that would have been very useful to me had I known about it two months ago, when I was exploring the database dump from my old Wordpress blog (I'm finalizing the process of re-launching it as a static site). I managed, though, with a combination of MySQL Workbench and a Common Lisp REPL.

Anyway, bookmarking for the next time I'll need to play with relational data.

I get something of this experience in Emacs via `org-mode`, `sql-mode`, and `ob-sql-mode` minus the data-importing functionality... though with babel it's probably doable in a code block using a script.

In my daily work I often need to analyze Excel and CSV files from clients. I use http://harelba.github.io/q/ and it has worked most of the time. But this one seems promising, especially being able to query data from a file and join it with data from a database.
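For reference, a rough sketch of that workflow using only Python's standard library: load a CSV into an in-memory SQLite table and join it against another table. The table names, columns, and join here are all made up:

```python
import csv
import io
import sqlite3

# Pretend this CSV came from a client.
clients_csv = "id,name\n1,Acme\n2,Globex\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER, name TEXT)")
rows = list(csv.DictReader(io.StringIO(clients_csv)))
conn.executemany("INSERT INTO clients VALUES (:id, :name)", rows)

# Pretend this table came from a real database dump.
conn.execute("CREATE TABLE invoices (client_id INTEGER, total REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [(1, 100.0), (1, 50.0), (2, 75.0)])

# Join the CSV-backed table against the "database" table.
result = conn.execute("""
    SELECT c.name, SUM(i.total)
    FROM clients c JOIN invoices i ON i.client_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
# result == [('Acme', 150.0), ('Globex', 75.0)]
```

SQLite's INTEGER column affinity quietly converts the CSV's string values on insert, which is what makes the join line up.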

Is there any Windows SQL software that can use system/machine ODBC data sources? My company uses OpenLink's ODBC drivers to access our main database (Progress OpenEdge). I have no problem using Python, Pandas, and pyodbc to connect to the database, but it isn't the best environment for developing queries.

Really nice article! Succinctly demonstrates the problem with not using premultiplied alpha.

> As an Artist: Make it Bleed!

> If you're in charge of producing the asset, be defensive and don't trust the programmers or the engine down the line.

If you are an artist working with programmers that can fix the engine, your absolute first choice should be to ask them to fix the blending so they convert your non-premultiplied images into premultiplied images before rendering them!

Do not start bleeding your mattes manually if you have any say in the matter at all, that doesn't solve the whole problem, and it sets you up for future pain. The only right answer is for the programmers to use premultiplied images. What if someone decides to blur your bled transparent image? It will break. (And there are multiple valid reasons this might happen without your input.)

Even if you have no control over the engine, file a bug report. But in that case, go ahead and bleed your transparent images manually & do whatever you have to, to get your work done.
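For the programmers on the receiving end of that bug report, the fix is small. A sketch of the conversion and the premultiplied "over" operator, in plain Python with floats in 0..1 (function names invented):

```python
def premultiply(r, g, b, a):
    """Straight-alpha RGBA -> premultiplied RGBA."""
    return (r * a, g * a, b * a, a)

def over(src, dst):
    """Composite src over dst; both must be premultiplied."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    inv = 1.0 - sa
    return (sr + dr * inv, sg + dg * inv, sb + db * inv, sa + da * inv)

# 50%-transparent white over opaque black -> mid grey, as expected:
over(premultiply(1.0, 1.0, 1.0, 0.5), premultiply(0.0, 0.0, 0.0, 1.0))
# -> (0.5, 0.5, 0.5, 1.0)
```

Convert straight-alpha source images once, at load time, and do all filtering and blending in premultiplied space afterwards.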

Eric Haines wrote a more technical piece on this problem that elaborates on the other issues besides halo-ing:

> Even with an alpha of 0, a pixel still has some RGB color value associated with it.

Wish the article was more clear as to why this happens. Let me elucidate: this happens because, per the PNG standard[0], 0-alpha pixels have their color technically undefined. This means that image editors can use these values (e.g. XX XX XX 00) for whatever -- generally some way of optimizing, or, more often than not, just garbage. There are ways to get around this by using an actual alpha channel in Photoshop[1], or by using certain flags in imagemagick[2].

This is extremely useful to take advantage of (that you can store RGB values in 0-alpha pixels). I've written some pretty simple but powerful shaders for a game I'm working on by utilizing transparent pixels' "extra storage", which allowed for either neat visuals or a greatly reduced number of images required to achieve a certain effect. For instance, I wrote a shader for a character's hair that had source images colorized in pure R, G, and B, and then mapped those to a set of three colors defining a "hair color" (e.g. R=dark brown, G=light brown, B=brown). If I didn't have the transparent pixels storing nonzero RGB values, the blending between pixels within the image would have been jagged, and the approach would have been unacceptable for production quality, leading to each hair style being exported in each hair color.

As a total side note, I really enjoyed the markup on the website. Seeing the matrices colored to represent their component color value is really helpful for understanding. Nice job, author!
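A rough CPU-side sketch of that three-channel palette trick; the function name and example colors are made up, and a real GPU shader would do the same arithmetic per fragment:

```python
def map_hair_color(texel, dark, light, mid):
    """Use the texel's R, G, B channels as weights for three
    configurable hair colors (all RGB tuples in 0..255)."""
    r, g, b = (c / 255.0 for c in texel)
    return tuple(round(r * d + g * l + b * m)
                 for d, l, m in zip(dark, light, mid))

# A pure-red source texel picks up the "dark" color entirely:
map_hair_color((255, 0, 0), (60, 40, 20), (180, 140, 90), (120, 80, 40))
# -> (60, 40, 20)
```

One source image plus three palette entries replaces one exported image per hair color; blends between channels (e.g. anti-aliased edges between R and G regions) interpolate smoothly between the palette colors.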

I don't like this article because it blames the wrong people and buries the real solution, premultiplied alpha, at the bottom. Already there are many comments here that are confused because they didn't even see the premultiplied alpha part of the article.

The issue with the Limbo logo was not that the source image was incorrect. The image was fine. The blending was incorrect because the PS3 XMB has a bug. Not using premultiplied alpha when you are doing texture filtering is a bug.
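The bug is easy to demonstrate with a single bilinear sample at a transparency edge; a sketch in plain Python with floats in 0..1 (function names invented):

```python
def lerp(a, b, t=0.5):
    """What bilinear filtering does between two texels."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def over_straight(src, bg):
    """Composite a straight-alpha RGBA pixel over an RGB background."""
    r, g, b, a = src
    return tuple(c * a + bgc * (1 - a) for c, bgc in zip((r, g, b), bg))

def over_premult(src, bg):
    """Composite a premultiplied RGBA pixel over an RGB background."""
    r, g, b, a = src
    return tuple(c + bgc * (1 - a) for c, bgc in zip((r, g, b), bg))

# Sample halfway between a transparent black texel and opaque white:
texel = lerp((0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 1.0, 1.0))

over_straight(texel, (1.0, 1.0, 1.0))  # (0.75, 0.75, 0.75): grey fringe
over_premult(texel, (1.0, 1.0, 1.0))   # (1.0, 1.0, 1.0): stays white
```

Same interpolated numbers either way; interpreting them as premultiplied keeps a white edge white over a white background, while the straight-alpha interpretation lets the transparent texel's RGB leak in and produce the dark halo.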

While reading this article, it struck me that the amount of "useless" data increases as the alpha value approaches 0. For example: in a pixel with RGBA values of (1.0, 0.4, 0.5, 0.0), the RGB values are redundant. Is there a color format that would prevent this redundancy? Perhaps by some clever equation that incorporates the alpha values into the RGB values? I don't think premultiplied alpha would work, because you still need to store the alpha value for compositing later...

Premultiplied alpha is also more "correct" in that it separates how much each pixel covers things behind it (the alpha value) from the amount of light it is reflecting or emitting (the color values). These two values should really be interpolated separately, and that's what premultiplied alpha gives you.

Premultiplied alpha results in less color depth, though. If my alpha is 10%, then my possible RGB values become 0-25. Even if I multiply by 10, I still lose the maximum possible values 251-255, and only values 0, 10, 20, 30... 250, are possible.
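That loss is easy to check with a little illustrative arithmetic, assuming 8-bit storage with integer truncation:

```python
alpha = 0.1

# Premultiplying by 10% alpha squeezes all 256 color values into 0..25:
stored_values = {int(c * alpha) for c in range(256)}
assert stored_values == set(range(26))

# Un-premultiplying can only land on multiples of 10; 251-255 are gone:
recoverable = sorted(round(s / alpha) for s in stored_values)
assert recoverable == list(range(0, 251, 10))
```

This is one reason premultiplied pipelines often prefer higher-precision (e.g. 16-bit or float) intermediate formats at low alpha.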

The correct solution is to pay close attention to all of the factors... and to be ESPECIALLY aware of pixel scaling. Provide your RGBA textures at the 1:1 pixel scale they will be rendered (or higher!) if at all possible.

You also have a similar problem when you render opaque, rectangular images without the clamp edge mode, and the renderer is in tiling mode, so the borders wrap around when your picture is halfway between pixels and become a mix between the top/bottom or left/right colour, corrupting the edges. Easy to fix, but annoying until you get what it is that corrupts your edges.

Also: "The original color can still be retrieved easily: dividing by alpha will reverse the transformation."

C'mon, you can't say that and then make an example with alpha=0. Do you want me to divide by zero? The ability to store values in completely transparent pixels is lost.

It's almost irresponsible to write an article on this topic in 2017 without explicitly mentioning bufferbloat or network-scheduling algorithms like CoDel designed to address it. If you really want to understand this article, read up on those first.

This is the story of how members of Google's make-tcp-fast project developed and deployed a new congestion control algorithm for TCP called BBR (for Bottleneck Bandwidth and Round-trip propagation time), leading to a 2-25x throughput improvement over CUBIC, the previous loss-based congestion control algorithm.

Network performance across national borders within China has been abysmal since the censorship got much more serious. BBR seems promising, so more and more people (including me) who bypass the GFW with their own VPS have been deploying BBR, and seeing marvelous results.

It seems like the best way to handle this situation is to assume that all other algorithms are hostile, and to seize as much bandwidth as you can without causing queue delay. That would reduce the problem set to a basic resource competition problem, which could then be solved with genetic algorithms.

> I've actually felt slightly uncomfortable at TED for the last two days, because there's a lot of vision going on, right? And I am not a visionary. I do not have a five-year plan. I'm an engineer. And I think it's really... I mean, I'm perfectly happy with all the people who are walking around and just staring at the clouds and looking at the stars and saying, "I want to go there." But I'm looking at the ground, and I want to fix the pothole that's right in front of me before I fall in. This is the kind of person I am. - Linus Torvalds @TED[1]

You could say the same thing about any non-glamorous/lucrative position.

"Garbagemen make the world go round. Without them, we would drown in our own filth"

"Nannies make the world go round. Without them, half the workforce would be stuck at home"

"Auto mechanics make the world go round. Without them, we would have no way of getting places"

Ultimately, all such arguments are inane and pointless, because every single job that exists in society

A) is important to the people paying for it, and

B) has wages based on both the importance of the job and how easy it is to find someone capable of doing it.

The idea of glamorizing any job, and allowing yourself to be influenced by a job's glamor rating, is just superficial drivel. Don't judge yourself or others by the job listed on their business card. If you feel the need to judge someone at all, judge them by the impact that they, as an individual, are making in the world.

"If the President had picked me to predict which country [in postwar Europe] would recover first, I would say, 'Bring me the records of maintenance.' The nation with the best maintenance will recover first. Maintenance is something very, very specifically Western. They haven't got it in Russia. If I got in there in the warehouse, let's say, and I saw that the broom had a special nail, I would say, 'This is the nail of immortality.'" - Eric Hoffer

I've observed this many times in my career. I think the best move career-wise is to be one of the people who make the new thing. Those who clean up after them, bearing the brunt of their design flaws and careless mistakes, will never be recognized, appreciated, or remunerated as well. At least in my experience.

Ask yourself this: who is the most famous maintainer you can think of? (Not someone who devised an innovation and then maintained it - pure maintenance)

There is probably a distribution here that matters - without maintenance there is no foundation for innovation, without innovation there is no motivation to maintain - man wants to produce and consume newer ideas, materials, tools, items etc.

There's a great piece in the Lapham's quarterly about maintaining NYC's infrastructure and how without any maintenance, NYC would be replaced by forest cover within 200 years. Can't find it right now but it is a great read trust me.

No, it's precisely the opposite! Maintainers (using whale oil for energy) would have driven every whale to extinction and left the world without an energy source a century ago. Maintainers (using horses for travel) would have drowned Manhattan in horse shit a century ago.

Innovation is, if anything, under-rated, under-funded, and under-supported. The homes of hundreds of millions of people, and energy itself, are threatened by depleting fossil fuels and global warming... and some of the major efforts to stop this have depended on effectively "insane" entrepreneurs like Elon Musk... not a smart system! All the while, hundreds of billions of dollars in health-care costs for, say, unnecessary tests flow to maintainers adding negative value.

Maintainers mostly either conservatively follow and accept the current system, or exploit it. It's innovators who've driven down the cost of lighting your home to a few hours of income, made books and information ubiquitous and cheap (perhaps to the detriment of wisdom, but that's another story), stopped wars through protest, and ended non-man-made famine.

I found the thesis interesting and plausible: that innovation is fetishized.

But on a slight tangent, I wondered whether the "innovation" that they complain about is a particular variant, one that we're all familiar with here: a pseudo-libertarian start-up variant.

"Innovation ideology is overvalued, often insubstantial, and preoccupied with well-to-do white guys in a small region of California"

It seems easy to argue against this tired representative of innovation.

By contrast there are those that would argue that most of the major technological and scientific gains have arisen, not from these VC hype-machines, but from large-scale state planning and investment. One of the best expositions of this argument is from economist Mariana Mazzucato: https://www.youtube.com/watch?v=yPvG_fGPvQo

Wouldn't maintenance lead to innovation, analogous to necessity being the mother of invention?

Most inventions came about as an easier or better way of doing something that the current maintainer (innovator-to-be?) decided to rework or recreate.

And is the author talking only about maintenance where no development can be done (not even changes that make the maintainer's life easier)? IMHO, maintenance procedures should also be constantly improved, and that would itself lead to innovation.

I think that is the reason that we hold innovators in high regard. People are very bad at it and it happens so infrequently. We rarely get the right person when we hand out credit so such idolization is usually meaningless, but I suppose in the long run that is not important. We have this irrational need to attach a person to the idea.

So we end up failing to properly credit any of the people that make and keep our civilization...

Or as I like to say, "incompetence makes the world go round". Extracting little glimpses of functionality out of a chaotic mess is a challenging, at times satisfying and definitely valuable exercise that keeps many people at work...

Maintenance is not the opposite of innovation; it is the opposite of good design.

There's an analogy here vis-a-vis tradition/progress. In order to be reasonably sure that a change is an improvement, you must understand what you're changing and how. To borrow a Chestertonian example, if you encounter a fence and you don't know why it's there, find out why before you remove it. Maintainers are in the best position to understand the impact of making changes, and because of that, they're able to function as either advisers or as "innovators" by knowing where improvements can be made and having the knowledge to understand why they're improvements.

The maintainers! I know a couple people connected to this group - heard great things about their 2nd conference last month.

The premise is great. From Russell's article on Aeon:

"We organised a conference to bring the work of the maintainers into clearer focus. More than 40 scholars answered a call for papers asking, What is at stake if we move scholarship away from innovation and toward maintenance? Historians, social scientists, economists, business scholars, artists, and activists responded. They all want to talk about technology outside of innovations shadow."

OT, but this site looks just great with Javascript turned off (as I usually do with all the "trendy"-looking longreads, as they tend to be processor hogs). Even animations on the title screen. Awesome front-end job.

"With the Alexa App, conversations and contacts go where you go. When youre away from home, use the app to make a quick call or send a message to your familys Echo. Alexa calling and messaging is freeto get started download the Alexa App."

It's interesting to see how fast Amazon can come to market with these new hardware pieces. I guess the fallout from the Amazon Phone at least yielded some lessons about hardware suppliers, etc. I realize they're throwing hardware out there prior to seeing what the software can do with it, but I think it's necessary to get people locked in.

I like their approach from the business perspective. Give the people a voice controlled speaker. Give them a remote! Now, give them a voice-controlled camera! Now, give them a voice-controlled screen! Soon, give them <insert novel sensor> and let them go hands free! Rinse-repeat.

I was battling back and forth FOR A MONTH with their skill certification approval team for a skill update that would allow customers to call people by name, where in the first version it was only by phone number.

They would fail the certification because apparently people didn't know how to test, or used fake numbers to make phone calls and complained the call would not connect, or the certificate validation (which was working before) would fail, etc. All sorts of things. A VERY frustrating process. I would submit the skill again for certification without making any change and get different results.

Now they announce their own calling feature, a week after finally approving our update.

It continues to surprise me how far ahead of them Apple is letting Amazon/Google get in this area. I've always been a big fan of Apple (despite their closed ecosystem), but have to admit that Amazon is seriously outplaying them on this front. Hopefully Apple surprises me and comes up with something even more innovative that can compete.

I feel like this entire product could be a Chromecast-esque dongle that connects to a TV. Having a personal dashboard would actually be quite useful, but this seems like they want to sell appliances not experiences.

Maybe they've gone with this form factor because of the 2x 2" speakers? But why would I want that when it could be plugged directly into my home audio setup?

Or maybe it's so they can include a touchscreen? But I thought the whole point was hands-free conversational interaction?

I guess I'm missing the point of this. Why would I, as a normal consumer, get this instead of a regular Amazon Echo?

People here are really missing the point... This isn't another iPad; it's a different way of interacting. It's not just video messaging either; it's a new human interface for interacting with software. You can communicate with someone and get suggestions at the same time. Think conversing with a friend and having Alexa aid in the discussion.

Friend 1: "Where do you want to go to the movies tonight?"

Friend 2: "I dunno. Alexa, have any good suggestions?"

Alexa: "Star Trek is playing at x:00 at X theatre."

Things of this nature.

Maybe it's just me, but based on the photos, this device looks quite ugly, which matters for a gadget that people put inside their homes, doesn't it? The "original" Echo has a futuristic design. This one feels more like something created in the '70s or '80s.

I'm not willing or interested enough to enable voice activation (Siri) on my phone or desktop, but thought Echo would be nice to have as a music player. The voice recognition is so reliable -- not just the NLP, but the mic array (unlike trying to activate Siri on the iPhone) -- that it's converted me to a true believer in voice interfaces, at least for simple tasks, such as playing music, turning on NPR, and activating timers and alarms. I do have the Fire stick connected to a projector but I've definitely longed for the ability to navigate YouTube or HBO on a tablet-like device with Alexa (again, not just the NLP, but the mic array, which Fire tablets don't have)

This seems like a nice step in that direction but I've been spoiled by the low cost of the Echo Dot, which when it's on sale is so cheap it can be a stocking stuffer. I don't think I could pay $229 for the first generation version of the Show, but will likely get its cheaper, more advanced iterations.

Great, now the Echo will record all video as well, "anonymize" it, and use it to improve their systems. This class of devices is the most puzzling to me. People know the value proposition is to record everything, but they keep buying them anyway. I keep waiting for the day when the scales tip in favor of privacy, but that never happens.

If they are going to enable calling, I sincerely hope they learn from the current phone spam and email spam mess and don't let just anyone call you at any time.

Ideally, you could authorize people to call you by giving each person/entity a different token that authorizes them to call you. Then if that person/entity sells the token to 3rd parties, you not only know who sold you out, but also you have the ability to revoke that token easily.
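The revocable-token idea above can be sketched in a few lines of Rust. Everything here is illustrative: the names and the design are mine, and nothing reflects a real Alexa or Amazon API.

```rust
use std::collections::HashMap;

/// A gate that only lets known tokens place calls. Each token is bound
/// to the party it was issued to, so a leaked or resold token
/// identifies exactly who sold you out.
struct CallGate {
    tokens: HashMap<String, String>, // token -> issued-to
}

impl CallGate {
    fn new() -> Self {
        CallGate { tokens: HashMap::new() }
    }

    fn issue(&mut self, token: &str, issued_to: &str) {
        self.tokens.insert(token.to_string(), issued_to.to_string());
    }

    /// Returns who the token was issued to, or None if the token is
    /// unknown or has been revoked (i.e. the call is rejected).
    fn authorize(&self, token: &str) -> Option<&String> {
        self.tokens.get(token)
    }

    fn revoke(&mut self, token: &str) {
        self.tokens.remove(token);
    }
}

fn main() {
    let mut gate = CallGate::new();
    gate.issue("tok-aunt-carol", "Aunt Carol");
    assert!(gate.authorize("tok-aunt-carol").is_some()); // call goes through

    // Token got resold to a spammer? Revoke it; future calls fail.
    gate.revoke("tok-aunt-carol");
    assert!(gate.authorize("tok-aunt-carol").is_none());
}
```

The key property is per-party tokens rather than one shared "phone number": revocation is surgical, and attribution of leaks is built in.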

Amazon is killing it in IoT/smart home. However, IMHO, they are making a bit of a mistake by not allowing developers to monetize their platform (at least the last time I checked). There were also certain device functions that apps could not utilize (e.g. programmatically muting and unmuting). I suspect they'll take a walled-garden approach to their new Echo devices too... if this were open, they'd win it all (again, just my opinion).

The main thing that annoys me about the Echo is that the knowledge graph is so poor. I can only choose from a limited number of things to ask the damn thing: Wikipedia, or start installing third-party skills.

I honestly think that, given the use cases, the Echo Show would be much more useful if the static structure were replaced with a rotating base, allowing the Echo Show to turn toward the source of a voice command (disableable via a setting, for privacy concerns). That would make the screen far more versatile while offering the same hands-free experience.

This was the direction I expected Apple to take prior to Jobs's passing. It seemed the rumored Apple TV would combine Siri with traditional television. Apple faces serious threats across the entertainment spectrum, from content to devices.

Everyone speculating on Apple acquisitions should be considering a Sony or LG buyout. I own stock in neither.

Eventually, with the internet of things, there will need to be a "home brain" type device to control all of the devices in your house. The company that holds that position of controlling what devices can work with others will have a lot of market power.

I developed this same thing 6 months ago. Setup and commands are a bit cumbersome due to being 3rd party but all you need is an Echo device and Android device with the Echo Sidekick app. Does everything the Show does except voice calls but you can send messages through Echo devices to other devices with the SideKick app. https://play.google.com/store/apps/details?id=com.renovotech...

Love the concept of the Echo; however, I don't see much value in a screen. For most tasks you'd need one for, it's usually worth the effort to pull out the phone, since you're then also not bound to a specific location.

This is more like an iPad with a better Siri. I guess people who talk to their parents or watch child cams are the target audience for this. A device that sits in the living room or bedroom doesn't need to show me CNN.

Now they're much closer to solving online shopping via a "smart home assistant". Communication only via voice leaves two uncomfortable options: either you blindly believe that you'll get the best price/option ("order xyz"), or you get stuck listening to a very slow reading of options (try listening to a search-result list). This little screen steps over that barrier, enhancing the shopping experience where needed.

If you can place multiple of these in a house and use them all together as an A/V intercom system, that'd be by far a killer feature. E.g. you could talk to your child in the basement, or to a coworker in another cubicle.

The telescreen received and transmitted simultaneously. Any sound Winston made, above the level of a very low whisper, would be picked up by it; moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug into your wire whenever they wanted to.

You'd be surprised / scared / outraged if you knew how common this is. Any time you've been in a public place in the past few years, you've likely been watched, analysed, and optimised for. Advertising in the physical world is just as scummy as its online equivalent.

During 2010-2012, I was part of a startup called Clownfish Media. We basically created something very similar to this and got scary accurate results then. Given how accessible computer vision has become, the image in the tweet comes at no surprise to me.

Best part: we got a first-gen Raspberry Pi to crunch all the data locally at 2-5 fps. Gender, age group (child, youth, teen, young adult, middle age, senior), and approximate ethnicity were all recorded and logged. Everyone had a unique profile, and we could track people between cameras and across days (underlying facial features do not change).

Next time you look at digital signage, just be aware that it is probably looking back at you.

Uh, I got sidetracked and brain-hammered by the devolving discussion on that Twitter thread, so I couldn't find the context for this pizza-shop kiosk. Is it a customer-service portal that attempts to identify the person in front of it and match them with an order, or a plain advertising display that captures the demographics of the people who happen to stop in front of it and look at it?

The main driver for the innovation and growth of the open web was that it is open. The diversity we got from that was tremendous. Anyone could make a website with whatever content they could think of, and people all over the world could access it. There was some kind of "web neutrality" in browsers: they didn't prefer one site over another. The second big boost came from the extensibility and customizability of browsers through extensions, which gave people even more control over what they consume. And on top of that, the browsers became open source, a big win all around.

Opera is going the other way. They have a closed source browser that directly integrates some specific products into the browser. You can't completely get rid of them, you can't integrate another product in the same way and I'd guess an extension can't mess with these pre-packaged addons. It's nice that you can have a messenger in a sidebar next to your current browsing page. But it's taking control away from users and that's not the way forward in an open web. Why can't this be an extension? Why can't it have a feature that lets me run two tabs side by side so I can choose what products to use there?

In general I think this Opera Reborn is still far behind, say, Firefox or official Chrome. Only now can you change the filter lists in their ad blocker, something ad-blocking extensions have been able to do for years. Now you can change Opera's theme; I could make Firefox look like whatever I wanted forever. For all the love the old Opera deserved, I don't think Opera is the future of web browsing.

>desktops and laptops, while theoretically more powerful multitasking tools, have been left behind

>Browsing and chatting simultaneously is cumbersome and inefficient now, as you need to switch between tabs when responding to a message

Or, since we all have widescreen monitors (and often multiple monitors) you could just have your messengers in a window next to the browser instead of a sidebar within the browser. Seems like a solution looking for a problem. What good is allowing messengers to reside within your browser other than that it lets the people who are tracking your browsing habits simultaneously spy on your messages?

I'd like to try it, but I probably wouldn't end up using it because it's not open source. I feel silly for rejecting a product on that basis, but openness is important to me. Here's hoping they open source it soon!

This reminds me of an interesting thing that happened when I uninstalled Opera VPN on my Android. After uninstalling, it automatically opened a browser to one of those "Tell us why you uninstalled" pages that you normally see on the desktop. This shouldn't be possible on Android.

I think they are doing this by having Opera Browser watch for uninstallation of Opera VPN and possibly vice-versa and when one detects the other has been uninstalled it launches the page, clever but annoying.

> Social messengers completely changed our lives, by allowing us to work, discover new things and communicate at the same time.

Nope.

> One of its novelties is the ability to seamlessly hop between discovering new content and chatting with friends, or even share online discoveries while browsing.

Happily, I know no one who uses, or will use, this new version of Opera. I can imagine few things that would disrupt or annoy me more, or do more to reduce my efficiency (or enjoyment) at my computer, than being subjected to even more random thoughts from the easily distracted.

I don't dare to use this since Opera got bought by that Chinese consortium of companies I'm not familiar with. Not sure if I'm being overly cautious? I'm thinking especially of Opera Link (which had a recent security breach, to boot!). Wouldn't that hand my passwords over to China? And I do want some sort of syncing; I'd go crazy without anything there, or with some iffy third-party addon.

It's not even just a trust issue with Chinese company culture. It's that the Chinese government has ways of getting into even resisting companies at its whim, in ways judged acceptable by standards completely different from Norwegian law. So even if I trusted Opera Software, even if I trusted this consortium that I don't know, even then...?

Hey, Opera folks who might be reading this: work on simplifying offline access to web content. I want to be able to maintain a ~100 GB offline data store, full-text searchable. Wikipedia, stackexchange/stackoverflow, Khan Academy, zealdocs, and a whole bunch of other sources of useful, browser-renderable web content provide dumps of their data.

And then they end up having to build special apps and extensions and other garbage that work around and hack the cross-domain/local-file policy, just so the content already on my disk can be read and searched by the browser.

When such a treasure trove of web content is accessible offline the browser can and should be the way to access it.

As the web becomes more and more a corporate sponsored attention sink strong offline support can be a valuable browser feature.

Opera used to be amazing, a few years ago it had a built in RSS, mail and IRC client, then even later (and right before the acquisition) they were still ahead with their UX by having a "Read Later" section to the browser and allowing you to easily customise and make folders in your "first page".

Are the FB Messenger, WhatsApp, and Telegram integrations official (i.e. known by, approved by, and/or supported by their respective companies)?

Asking because I can't find a reverse-shoutout to Opera on either Facebook's, WhatsApp's, or Telegram's blog. This is not altogether unusual (especially given the recency of the Opera announcement) but it still makes me curious.

Where does Opera get their money? Building and maintaining a browser seems like an expensive undertaking, and yet, despite nobody I know using it or even testing their websites on it, they've been actively developing and maintaining their browser for years.

I really like the concept of bringing i3-style tiling management to the browser space, but why is it specific to messengers? Since most well-designed websites work at any size, couldn't you just have a permanent tab at the side of the screen of, say, twitter, or some browser-based IRC client? Admittedly, I'd probably use facebook messenger as this tab anyway, so my needs are covered, but it seems like a missed opportunity. Obviously more technically-inclined individuals on more technically-inclined OS's have this already with proper tiling window managers but Windows and MacOS users probably don't want that system-wide and the browser would be the perfect place to introduce it.

Opera is now a combination of the Epic Privacy Browser and the Vivaldi Browser. They copied Epic's privacy features and in-built VPN/proxy and Vivaldi's social networking sidebars (which incidentally Opera pioneered and Vivaldi's founder is the Opera co-founder).

Or even better, they could have made the side-by-side feature work for any web pages at all as a more general layout feature, with some kind of easy access toolbar to allow you to pop them in and out as desired.

These moves that Opera is making now remind me a bit of how, back in the day, WinAmp didn't know how to fend off the competition, so they started integrating tons of "extra functionality" out of sheer desperation... it never works.

Funny for this to happen on the same week I switched my default browser from Firefox to Vivaldi [1]. That is really "Opera reborn", since it's built and owned by the original team of Opera developers - which is probably why it's so good.

How does Opera make money now? As per my understanding, in the pre-iOS and Android era, Opera dealt directly with device manufacturers and sold their browser as the default on devices. Since Android, how are they even making money?

I've used their browser for a year now, because it has better battery saving than Chrome and some other cool features, like detachable video windows and a built-in VPN. You can now even use all the Chrome plugins in Opera, via another plugin.

Hey this google chrome skin has made some progress since last time I heard about it. It's still light years behind opera 12 and sadly will never regain what it lost.

If you're interested in the actual reborn Opera experience, it's Vivaldi you're looking for. The Opera co-founder and many of the original Opera team are making Vivaldi, and though it's based on the Blink engine too, and hence limited and unable to do exactly what old Opera could, it has the same philosophy and vision.

The other alternative is the free software otter[1] browser aiming at recreating the opera 12 experience.

Great stuff, lines up a lot with what I'd care about as an ex-gamedev and spending a bit of time with Rust. One minor point:

> First class code hot-loading support would be a huge boon for game developers. The majority of game code is not particularly amenable to automated testing, and lots of iteration is done by playing the game itself to observe changes. I've got something hacked up with dylib reloading, but it requires plenty of per-project boilerplate and some additional shenanigans to disable it in production builds.

Lua is a great fit here and interops with Rust(and just about everything else) very well.

I largely echo the sentiment of using Rust for game development. The world doesn't need another Flappy Bird clone but that's what I wrote because I ended up porting a Go version that was using SDL originally by Francesc Campoy from the Golang team: https://github.com/campoy/flappy-gopher

I was able to build the Rust version fast, and the SDL library is actually quite usable/stable for most things.

Flappy-Rust has particle effects, the beginnings of parallax scrolling, and basic collision detection. The Rust code ended up being pretty reasonable; however, I'm quite sure there are a few places where I could have simplified the sharing of assets.

It would be nice to have a minimal game-ish program as an example for people learning Rust.

When I teach kids javascript I start with an etch-a-sketch. It's aided by a simple library to hide the mechanics of the HTML canvas element, context, etc. This allows it to be small enough that they can view it all in one go and build upon it.

There might be merit in writing one of these in every language (and a companion that uses the mouse), maybe placing them on GitHub. With a really simple program like this, you can focus on learning the language while making something. It's a tough job figuring out how to learn a language while simultaneously learning how to write the boilerplate needed to get something onscreen.

Neat! I was just complaining about the lack of multiplatform Rust games proving out the concept. If only there were some consoles or handhelds on the list... although iOS/Android being on the list is somewhat encouraging.

The performance on my Android phone (Nexus 6) is not good. I would have thought Rust would be fast. The fps is fairly low, a lot lower than Maps or Chrome, at least when Maps isn't freezing up. And it seems the transitions between levels might be proportional to fps, because they are irritatingly long.

I have not tried out Rust yet, but couldn't this be solved by wrapping the float in a single-field struct that checks for NaN on construction, implements Ord using PartialOrd and otherwise passes everything through to the ordinary float inside?
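Yes, that's exactly the standard pattern (the `ordered-float` crate on crates.io offers a battle-tested version of it). A minimal sketch of the idea, rejecting NaN at construction so `Ord` becomes sound:

```rust
use std::cmp::Ordering;

/// A float wrapper that can never hold NaN, so a total order exists.
/// The name `NotNan` is illustrative; `ordered-float` uses the same one.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct NotNan(f64);

impl NotNan {
    fn new(x: f64) -> Option<NotNan> {
        if x.is_nan() { None } else { Some(NotNan(x)) }
    }
}

// Safe to claim full equality: NaN (the only value where x != x) is
// excluded at construction.
impl Eq for NotNan {}

impl Ord for NotNan {
    fn cmp(&self, other: &Self) -> Ordering {
        // partial_cmp only returns None when NaN is involved, which
        // the constructor rules out, so this unwrap cannot panic.
        self.partial_cmp(other).unwrap()
    }
}

fn main() {
    let mut v: Vec<NotNan> = [2.0, 1.0, 3.0]
        .iter()
        .filter_map(|&x| NotNan::new(x))
        .collect();
    v.sort(); // now allowed, since NotNan implements Ord
    println!("{:?}", v);
    assert!(NotNan::new(f64::NAN).is_none());
}
```

The cost is an `Option` (or error) at every construction site, which is precisely where the "floats aren't `Ord`" problem gets pushed in this design.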

It is an interesting educational project but staying away from Unreal or Unity3D is really a tough decision if your project is going to need some more features that are already developed and well tested in one of these engines.

Great writeup. Eye-opening that you can actually write iOS apps in Rust. w00t, totally trying this for my next app. (It's a bit overly expensive for my liking; otherwise I would have given your game a shot.)

This is probably not a popular opinion here on HN, but why does it matter what language you use to make your game? You can use virtually any language to make a game. From my point of view, the best language for a game is the one that makes you most productive at cranking out code. We all have our own personal preferences about which language is best, which I think is fine; you should code with the one you feel most productive with. At the end of the day, users will not be able to tell the difference. All that matters is whether your game is fun or not.

This is unambiguously a good thing. Jigsaw didn't solve the toughest problems it had hoped to solve, and there are much less impactful ways to accomplish some of these goals. As it stood, Jigsaw broke tons of applications that did non-trivial things with class loaders, reflection or even circular deps. As a consequence, while Jigsaw (after significant effort) might not be disastrous for Java, it's pretty awful for most things that aren't Java that target the JVM, like Clojure. Meanwhile, significant incremental engineering benefits in JDK9 were being held back by this sweeping change; so now maybe we can just get a better JVM without breaking the world.

As it stood, Jigsaw required a ton of engineering effort internally and externally and severely broke the backwards compatibility that has made Java such a workhorse of the enterprise. All of that at very questionable benefit to the end user: still no module-level versioning of deps. That doesn't mean this effort is totally wasted: the oldest parts of the stdlib were due for a touch-up.

Lord knows that I have my differences with how Red Hat operates sometimes, but I don't think suggesting their vote against Jigsaw is somehow a political plot to benefit OSGi or JBoss modules is reasonable. (FWIW: I also don't think OSGi's and JBoss' alternatives are great, but that's OK, because they're opt-in.) That theory has little explanatory power: why are all of these other companies voting against?

Disclaimer: I have no stake in any of the voting parties, but I do write a lot of JVM-targeting software.

Hi all, I'm the EC representative for the London Java community (~6000 Java developers, have lots of global outreach programmes etc) - here's our more detailed post on why we voted "No". https://londonjavacommunity.wordpress.com/2017/05/09/explana... - Happy to answer further questions although I've probably found this thread too late :-)

Interesting to see Twitter voting No as well, with the comment "Our main concern is that it is likely that this JSR will prove disruptive to Java developers, while ultimately not providing the benefits that would be expected of such a system".

I realize that there are some actual technical questions here but I can't help but be fascinated by the politics of this.

Are IBM and RedHat just against Jigsaw because of commercial interests in JBoss Modules and OSGI? Do they actually care about the technical merits? How can Oracle get to this point in the process without more buy in from the community?

Also I never knew who was part of the EC. Interesting breakdown on who voted for or against.

Will Java 9 go out without Jigsaw? I imagine if they have already modularized the JDK then it's impossible to release Java 9 without Jigsaw.

I have no idea how the JCP works, but I find it odd that Google doesn't have a vote in this process. They're clearly one of the biggest users and developers of Java in the world, even if you discount Android. Did they choose to abstain from this process, or were they kept out somehow?

Jigsaw would have helped thousands of developers. Yes, it does not solve every use case, but that doesn't mean it solves none of them.

Killed for political reasons: OSGi, IBM, Red Hat, and money. What the OSGi evangelists won't understand is that Jigsaw does not aim to solve the same problems as OSGi.

Bonus: if OSGi were a good thing, most developers would be using it after decades of existence, instead of practically no one. Which proves that OSGi does not solve the problems of the mainstream developer, which means turning Jigsaw into OSGi-NG is not the way to go.

I am afraid this is the death knell for the Java community process. I simply don't see Oracle taking out Jigsaw from Java 9 after investing so much effort into it. Also, a lot of folks were looking forward towards Jigsaw as a lightweight module system compared to heavyweight OSGi.

I like to think of the Java ecosystem as a giant elephant balancing on one leg, careening downhill on its squeaky little JVM roller skate.

I still remember that Java started as a toy language. And for all its security features and incremental improvements to JVM byte codes over the years, it still looks like a toy.

The classloader is the roller skate and the zillions of interoperating jars are the elephant. For the most part, the elephant stays up and continues blasting down that hill.

I used to think the scene was funny, but now I'm uncertain. Only through tools like Gradle and Maven have we papered over the inadequacies of the classloader system and gotten some control of it.

I had hopes that jigsaw would give the elephant an option of a little car to drive or at least a second skate to aid with balance. But safety trumps all, and such a huge breaking change should be avoided if we can.

Can somebody provide a bit of background? I was looking forward to JDK 9 modules, but I didn't spend much time reading into it. I am quite surprised that it was voted down, since I think introducing modules is a step in the right direction. Also, judging from the comments, it seems that there will be a second vote?

Interesting to see NXP Semiconductors there. Being primarily a semiconductor manufacturer how do they use JVM based (Java etc) language ecosystem in their products? SDK to program their boards for development purposes?

I have little idea about cars (Java), but in carpet land (JS): we had the largest module ecosystem ever and it was pretty much ignored by TC39 in favor of a new solution, which has technical benefits (it's async), but there's no migration path from the current standard to the new anointed 'standard'. Sometimes technical committees just refuse to pave cowpaths. Look at the amount of lines it takes to do an XHR in the 'fetch' API vs superagent in, say, 2012.

Most "No" votes are only because the voters are concerned about their influence within the JCP... This is a sad day for software development.

... is concerned about the lack of a healthy consensus among the members of the Expert Group. From our point of view, the lack of consensus inside the EG is a dangerous sign. I understand IBM's and others' reasons for their "No" vote, and heard many similar concerns from e.g. the OSGi community or contributors behind major build systems like Maven, Gradle or Ant. Most of their concerns are still unanswered by the EG or Spec Leads. What we are especially concerned about, however, is the lack of direct communication within the expert group. We echo ... comments in that we absolutely recognize the tremendous achievements and the great work that has been carried out until now by the EG members as well as (and especially) by the Spec Lead himself.

Politics will either delay Java 9 or completely kill it. Some people just want to show their teeth. Sad.

One possible partial explanation for this is the same reason the Gates Foundation wasted a bunch of money fostering small high schools. Smaller high schools were some of the best performing schools... but also some of the worst [0].

The answer is just that small counties have high variance. By chance some small counties will be a lot higher or lower than the national average.
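That small-sample effect is easy to demonstrate with a quick simulation (a sketch with made-up numbers: every simulated person is drawn from the same age-at-death distribution, mean 78, SD 15). The per-county averages swing far more when the counties are small:

```java
import java.util.Random;

public class SmallSampleVariance {
    // Spread (max - min) of the per-county mean age at death, when every
    // individual is drawn from the exact same distribution.
    public static double spreadOfCountyMeans(Random rng, int peoplePerCounty, int counties) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (int c = 0; c < counties; c++) {
            double sum = 0;
            for (int i = 0; i < peoplePerCounty; i++) {
                sum += 78 + 15 * rng.nextGaussian(); // mean 78, SD 15 years
            }
            double mean = sum / peoplePerCounty;
            min = Math.min(min, mean);
            max = Math.max(max, mean);
        }
        return max - min;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // 500 "counties" of 100 people vs. 500 "counties" of 100,000 people.
        System.out.printf("spread, small counties: %.1f years%n",
                spreadOfCountyMeans(rng, 100, 500));
        System.out.printf("spread, large counties: %.1f years%n",
                spreadOfCountyMeans(rng, 100_000, 500));
        // The small counties show a spread of several years; the large ones
        // a fraction of a year: pure sampling noise, no real difference.
    }
}
```

Nothing about behavior differs between the simulated counties; the extremes are just where the noise happened to land.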

I would be interested in seeing what a Cox proportional hazards model would show: whether the remaining variation is related to pollution, meth, economics, etc.

Not only county-to-county. My friend did his PhD dissertation on how (at least in Rochester, NY), life expectancy varies by decades between zip codes. His research focused on urban Food Deserts and studied how the lack of access to healthy food nearby restricted diets to that available in convenience stores (chips, soda, etc). I wish I had access to his dissertation, but he just defended a month ago and cannot find it in any publications right now.

A very similar article by many of the same authors was reported in JAMA in Dec 2016.

From the JAMA Arch Int Med article from today, p. E6: "...At the same time, 74% of the variation was explained by behavioral and metabolic risk factors alone, while only marginally more variation was explained by socioeconomic and race/ethnicity factors, behavioral and metabolic risk factors, and health care factors combined."

From the WaPo article: "Mokdad said countries such as Australia are far ahead of the United States in delivering preventive care and trying to curb such harmful behaviors as smoking. 'Smoking, physical inactivity, obesity, high blood pressure: these are preventable risk factors,' Mokdad said."

In NYC, and not just Manhattan, New Yorkers are doing better because of a number of interventions initiated in 2001, when Mayor Bloomberg and Dr. Tom Frieden took over as Mayor and Health Commissioner.

Adult smoking is 14% in NYC, 24% in Louisiana. Raising the cost of tobacco contributes more than half the effect of getting smokers to quit and to stop teens from ever starting.

NYS tobacco tax is $4.35 per pack, and the city adds an additional $1.50. Cigarettes sell for at least $12 per pack here.

This map seems correlated with socioeconomic status and all the health implications that go along with that. And it looks similar again to Republican voting districts. It's Sarah Palin's "Real America", so to speak.

One thing the Democrats would do well to focus on is the fact that there's a large portion of the country that is sick, where the statistics look more like an underdeveloped country. Those of us who live in the major cities would do well to empathize with this other part of the country and their malaise, even if for our own sake of having a more sane and less partisan government.

I couldn't help but notice that the top three counties (Summit, Pitkin, and Eagle counties in CO) are largely empty space with some of the country's best ski resorts. I suspect that the life expectancy there may largely be driven by fit retirees who move there.

I really wish that journalists would learn some 3rd grade maths before writing anything with numbers in it. How does the life expectancy vary by "more than 20 years" when the difference between the counties with the highest (85) and lowest (67) life expectancies is 18 years?

Is there any reason this isn't just selection bias? Young, healthy people with good careers move to big cities. The rest stay behind. Do we have data comparing those who stuck around vs. those who grew up there but moved away?

Heck, I bet it varies within a 5-mile radius. Just go to East Palo Alto and then go to "not east" Palo Alto. The reasons, I think, are pretty obvious and almost certainly correlated with how affluent your neighbors are. Money basically solves wealth-related problems like access to good schools and healthcare, even though in theory healthcare and education should not be so strongly coupled to money.

The "life expectancy of a county" is not a trivial thing to define if you ask me. Is it how long children born now in a county can expect to live, no matter where they live or die, does it only depend on people who die in the county, or is it somehow weighted by the time people live in the county? Maybe there's a standard definition, but my guess is that most people don't know.

>"We are falling behind our competitors in health. That is going to impact our productivity; that's going to take away our competitive edge when it comes to the economy," Mokdad said. "What we're doing right now is not working. We have to regroup."

What's the logic behind this? Out of curiosity. It's a morbid thing I hesitate to say, but from a purely utilitarian view, isn't it better for a country from a macro perspective if people die as close as possible to when they finish their working life and retire?

I might be completely off base there, and this is mostly a request for more information, not saying people should die early. As far as I'm concerned I hope we all live to 200.

Oglala Lakota County, which is completely contained inside the Pine Ridge Indian Reservation, is served by the Indian Health Service, with 94%+ of the population eligible for free, government-provided medical care. I notice that, other than pointing it out on the map, they didn't discuss it in the article.

Of course life expectancy is going to vary by county, because the variables that contribute to life expectancy (income, lifestyle, dietary habits, quality of health care, and access to health care) are vastly different.

Not surprised. I visited the Midwest a few years ago, and obesity was ridiculously off the charts compared to the Intermountain West or WA/CA. There are significant social factors playing out on a wide and complex scale.

My advice, if you're stuck in an area with unhealthy habits, is to move to an area with healthy habits.

What about from country to country. I'm from the US and, at least where I live, it is rare to see someone smoking. Second hand smoke has basically become a thing of the past. But this past week we went up to Vancouver BC and we were shocked at how many people were smoking. It seemed like you couldn't go 10 feet without walking through another cloud of second-hand smoke. Apparently the anti-smoking campaigns of the US never made it up North ;-) Or maybe their socialized medicine makes it so that they don't have to worry about health consequences as much. I don't know. I just wonder if the life expectancy is lower in Canada (or at least Vancouver) since smoking seems to still be A-OK.

I create a throwaway account and say this almost every time a post hints at something like this: it is commonly taught at the school of public health at Cal that "your zip code is a stronger indication of your life expectancy (and quality) than the color of your skin". This has been known for years now.

If there was no difference in behavior at all between all of the people of the USA, I think you'd still see pockets of more and less progress, just due to the natural distribution of dying. I'm curious what the expected variance by zip code would be if everyone's behavior was identical.

As the article states, some countries are making more progress than the USA on modifiable risks, such as smoking. Australia is one such country. If advertising health were as profitable as advertising vices, we'd be in better shape.

Last year when donating specifically to Thunderbird was made possible on mozilla.org, I donated to the project because it has provided a lot of value over the years.

Recently I started looking at the discussions on the tb-planning mailing list and it looks like we'll get a revamped (fully rewritten) Thunderbird. That sounds like a very long project to me - probably a few years just to bring it to what Thunderbird already provides today. Plus the extensions system needs to be revamped as well (similar to what's happening on the Firefox side with XUL ones going out). Getting Exchange calendaring done is also not a priority because of the complexity and the effort needed. So it looks like we will get a better maintainable product after some years. I'm not sure if that's going to appeal to many people to donate.

I'm happy with Thunderbird and some extensions that I use regularly, with the only exception being calendaring support for Exchange being very poor and unreliable (even with the Exchange EWS Provider extension or with external solutions like DavMail). Since I don't like taking risks with email client alpha or beta releases because of the fear of data loss (and with huge mailboxes, even detecting data loss would be a chore), I'll just stick with the current version and hope that the new revamped one comes in a stable form sooner (of course, I will donate periodically). I'm excited and afraid!

Day to day I use my email provider's regular web interface but I use Thunderbird every few months when I need to do a massive email cleanup - there is no other tool I'd rather use and it's indispensable to me for that purpose.

Also the article states, "In many ways, there is more need for independent and secure email than ever" and I agree 100%. Thank you to everyone who works on this project!

I use Thunderbird as my main email client and I have a bit of a love-hate relationship with it! I have a complicated email set up with 1000s of folders, and lots of mail accounts and filtering and by and large it does a great job.

I still use mutt when I really want an email powertool, but I can't use it as my daily email client any more (and haven't for years) now that HTML emails are so prevalent.

I use lots of plugins with Thunderbird (Copy Sent to Current, Enigmail, External Editor, Nostalgy, QuickFolders, Identity Chooser, Mail Redirect, ...) to try to bring back some of the functionality I'm used to with mutt, and it works quite well now.

In recent months I find Thunderbird needs restarting once a day which is frustrating. It goes into some kind of internal loop processing an email and never returns. Probably a consequence of too many plugins!

>But there are still pain points: build/release, localization, and divergent plans with respect to add-ons, to name a few. These are pain points for both Thunderbird and Firefox, and we obviously want them resolved. However, the Council feels these pain points would not be addressed by moving to TDF or SFC.

and then this

> We have come to the conclusion that a move to a non-Mozilla organization will be a major distraction from addressing technical issues and building a strong Thunderbird team. Also, while we hope to be independent from Gecko in the long term, it is in Thunderbird's interest to remain as close to Mozilla as possible, in the hope that it gives us better access to people who can help us plan for and sort through Gecko-driven incompatibilities.

So I'm not sure I fully understand their direction. Are they simply less focused on solving those issues right now?

I use Thunderbird but don't really have a horse in the race, I get what I need out of it and I support them, I'm just curious.

In my mind, there are a handful of things that are essential to getting Thunderbird back to a usable state... some of these could be plugins...

First, the exchange/calendar integration options clearly suck... establishing a clear calendar interface as a built-in with extensible points for plugins for authentication/sync of calendars would be a good start.

Second, likewise with calendar auth/sync would be an extensible interface for folder sync and authentication, so that a cleaner integration for common providers based on an underlying IMAP can be used... this way the conventional "junk, spam, inbox, sent" folders could be presented the correct way as well as the underlying storage for a given provider.

Also, along with calendar/email would be more extension points for scheduling, contacts, etc...

As it stands, even if there were different plugins for a google calendar and an exchange/o365 calendar, contacts, etc... if the underlying pieces can be shared, it would be a better user experience.

Moreover, there needs to be some serious reconsideration of the UI/UX... I'm a big fan of material design, but some variation on that, coming a lot closer to a Gmail app for the desktop, would be a really nice start... getting calendar/task/contacts integration points and primitives for extension would go a long way here. Having the core UI go the same direction as Servo, with most of the UI/extensions being HTML/JS based, would be nice.

Likewise, NPM compatibility for extensions' modules would be nice as well.

I use thunderbird because my work uses gmail. I guess (maybe) I could add the secondary work gmail account to my regular gmail. Not sure, the other alternative is to keep a separate incognito window or keep work email open in a separate browser.

Frankly, I think it would be easier to write a GUI over mutt than to rewrite Thunderbird.

I was a heavy user of Thunderbird, but after migrating to mutt, it's basically obsolete. The only pain point is really HTML, but converting it to text is Good Enough most of the time. Outlook-produced emails still look like crap, but I can click a button and open it on Firefox.

Mutt is not only alive and kicking, there's the new NeoMutt project that is the NeoVim for Mutt. We have initial Lua scripting capabilities now.

The Thunderbird Council is optimistic about the future. With the organizational question settled, we can focus on the technical challenges ahead. Thunderbird will remain a Gecko-based application at least in the midterm, but many of the technologies Thunderbird relies upon in that platform will one day no longer be supported. The long term plan is to migrate our code to web technologies

Mozilla is dumping XUL tech from Gecko left and right, and removed proper "classic" mod support from Firefox... how is this a bright future? Thunderbird, as a big XUL app, is stuck with a soon-to-be-unsupported old Gecko. And how is the plan to slowly rewrite it viable? Replicating the dated UI in HTML5 will be an even bigger clusterfxxk.

We need a proper open source offline client, and it should have a modern UI with at least a conversation view like Gmail's. Wasn't there an HTML5-based email client in Firefox OS? Start with that code and set up a new Mozilla Foundation funded offline email client, and keep security support for Thunderbird until the new email app is ready.

I hope The Document Foundation picks it up and adds it to LibreOffice as a FOSS alternative to Outlook.

I get thousands of emails a day. I have several email accounts, because I keep switching to a new one when the old one gets a lot of spam. I email my friends and family my new address, but they keep writing to my old ones. Message filters help me sort stuff into different folders to find important emails and attachments.

No Excel? It's the most widely used visual, grid based functional programming environment. MS claim more than a billion Office users. Any time one of them edits a formula in Excel they're programming. Would also be nice to see AVS/Express in there. Jeff Vroom built the AVS VPE more than 20 years ago.

I threw my own hat into the ring with my undergraduate thesis, in which I wanted to create an editor for Elm which took advantage of all the niceties that strongly-typed functional reactive programming affords. I also took a closer look at some of the projects mentioned here (Light Table, Lamdu, Tangible Values, Scratch/Hopscotch, Inventing on Principle etc...). You can find its (now totally obsolete) remains here: https://github.com/lachenmayer/arrowsmith

The idea was basically: you should be able to use different editing UIs for values of different types. The lowest common denominator is code as text, but you should be able to switch on different editing UIs for numbers, colours, graphics, record types etc. The second half of this video shows off some demos for all of these: https://www.youtube.com/watch?v=csRaVagvx0M

Author here. This doc is just my personal curation of notable/interesting UIs for programming, as inspiration for my own research. Not attempting to be comprehensive, and being somewhat opinionated on the level of generality required to be called "programming". Thanks for all the links, I'll check them out.

I'm definitely in the "prefer to read" camp. A lot of these, especially the more graphical ones (that aren't just fancy syntax highlighting) seem way too disjointed for me. I don't want my code floating around in separate bubbles/windows, I want to see it in-context, and I want to be able to see a lot of it at once.

I do think that visual representations of code can be very helpful for analysis, especially if you can see the code running in the visual representation while you're debugging it. But for editing, I want the text view.

The other thing that looks really handy is the ability to use proper mathematical symbols, when that's appropriate for what you're doing. The point of those symbols, after all, is to be a concise textual representation of the concepts. The problem is our input tools; a keyboard and mouse are really not suitable for a written language developed for hand-held pens and paper. That might change soon; the latest touch screens might make it possible to have a notebook-sized pad with very-high-precision pen input as a common third input device, next to our keyboards and mice. If that becomes common, languages and IDEs that make use of it might become common as well.

One of my formative experiences with programming er, programs, was 'The Quill' (https://en.wikipedia.org/wiki/The_Quill) for the ZX Spectrum. Perhaps not a true programming environment but I have fond memories of writing an adventure called 'The Jewels of Zenagon.' Fun times :)

Another programming UI: I have been designing a graphic based programming language: xol. In xol code is represented with graphical elements instead of text. A prototype of the first version (partially working code editor) can be seen here: https://github.com/lignixz/xra9

There is already a second version design, 2x better looking, not revealed yet. I also have some additional ideas that may improve the language further.

Thanks Jonathan, that is just excellent again. These days I've dedicated myself to getting out of my chair (I've programmed standing up for around 7 years now, but that's not enough as I get older); my goal is being able to walk while programming. So many of the interfaces you have here can never even aspire to that. After 10 years of chasing this goal and about 40 failed prototypes, I'm finally at a point where I can say working while walking is no slower than sitting down (I'm well aware this is also related to getting older; I'm capable of doing far more in my head than I was 20 years ago). It's interesting to see how all these interfaces depend on your being stationary, and some not just stationary but stationary in front of a quite massive screen (visual programming often needs rather large surfaces for even trivial things).

Nice to see Epic's Unreal Engine 4 Blueprints in there. Right now I am relying heavily on Blueprints instead of C++ as I wait for some features to mature, and the interface is really growing on me. I just find it easier and faster to prototype in.

Mid 90s, I spent some time trying to make a "structured editor" for VRML-97, where the scene graph and the textual representation were linked (two way editing). I didn't get very far. I'm glad others continue to work on these ideas.

This was kind of depressing. It seems like almost everything aside from IPython and Mathematica in this list is still worshiping at the altar of PARC. That, and two of the most common UIs for GSD (Vim and Emacs) aren't even on the list.

On a personal note, I'm particularly hostile to the Squeak/LabView style of flowchart programming. Once you get something even slightly non-trivial going, you can just feel your life force being drained by all of the scrolling and zooming and dragging and futzing.

Hey, CEO/cofounder of Repl.it here. Was pleasantly surprised to see this on HN! React Native and Expo have taken the world of mobile development by storm, and we're happy to play a part in spreading this amazing technology.

Many of you might know us as one of the first in-browser REPLs (for 30+ programming languages: https://repl.it/languages). Our mission is to make programming more accessible, and that's why, more recently, we've also been working on tools for educators who want to teach programming. Our Classroom product (https://repl.it/classrooms) makes it easy for anyone to teach programming online and in physical classrooms.

Is it possible to support creating and deploying Minecraft server mods using repl.it?

At one point (2-3 years ago) my son was interested in doing this but the pain of installing Java, an IDE (NetBeans), getting started in Java programming and deploying it on a vanilla Minecraft server was just too much.

I'm not American, but I've been hearing about your health system for several years. Ironically, I know more about it than my own country's (Ireland).

Several years ago, there seemed to be a lot of talk about how much the US spends (private & public) per capita on health. It's a lot more than everywhere else. This was usually presented in the context of the health care regime: a UK-esque system, a Swiss-like system, etc.

Lately, that comparison seems to come up less. Obama-care, Trump-care or Bernie-care would mostly deal with who pays & how, not how much.

The who-pays question is a favourite ideological one, so politicians and commentators are comfortable with it. But I think the how-much question is probably the more important one, and the harder one to solve. If the US could get costs down to average European rates, then I'm sure a workable system could be found within the confines of most ideological frameworks.

The problem is that getting costs down is almost impossible. Costs are the salaries of doctors & nurses, a giant pharmaceutical industry, thousands of radiologists and ultrasound technicians, and the machines they use (far more frequently than Europeans do).

Getting costs down to EU levels would mean the medical industry shrinks like manufacturing shrank two generations ago.

I don't have a solution to suggest, but I do suggest toning down the ideological discussion. The problem is more of a technical one.

There are many, many problems with healthcare in the US. Off the top of my head, the big ones are:

* Endless number of middlemen and administrators.
* Every player in the healthcare chain benefits from higher prices.
* No price transparency.
* Tacit collusion is rampant.
* "Cost no object" mentality to treating the dying.

The last one, while insensitive, is true nonetheless, and it's alarming that over 50% of all healthcare spending takes place in the last two years of a person's life. We have basically decided that it's okay to spend literally any sum of money on a dying person in order to prolong life by an average of a few months. And the problematic word there is average, because some people do live a lot longer, and that's what we all look to. I realize this is grim and seemingly lacks humanity, but unfortunately that doesn't make it not true. Charlie Munger, who is on the board of Kaiser Permanente, said this same thing yesterday..."over-treatment of the dying" was the biggest problem they faced.

It's reminiscent of our approach to college education - justified at any cost. So we push millions of kids into a schooling system that's not right for them, and the result is a lot of crappy education, worthless degrees, student loans, etc. Once we flip the switch to "there is no price you can put on _____" things get sideways FAST.

The middleman role of insurance companies in American healthcare seems completely useless. They're not serving patients, doctors nor the national economy by siphoning off enormous profits from the 17% of GDP that gets spent on healthcare.

Getting rid of them would be extremely hard, of course, given how well entrenched they are thanks to lobbying and regulatory capture.

Insurance companies are a massive-scale version of the car dealerships that have managed to keep Tesla out of many US states by taking advantage of local legislation -- nobody would want to deal with a car salesman or an insurance company given the choice.

In the US, health care costs since '95 went from 13.1% to 17.1% of GDP, while in Germany they went from 9.4% to 11.3%. It is actually far worse than the article tells if one considers the age structure of the two countries: Germany has 21.7% of its population over 65 vs. 15.25% in the US.
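Putting those quoted shares side by side (simple arithmetic on the figures above, nothing more): the US share grew roughly half again as fast as Germany's.

```java
public class HealthShareGrowth {
    // Growth factor of health spending as a share of GDP.
    public static double growthFactor(double from, double to) {
        return to / from;
    }

    public static void main(String[] args) {
        // Figures quoted above: US 13.1% -> 17.1%, Germany 9.4% -> 11.3% since '95.
        System.out.printf("US share grew %.0f%%%n", 100 * (growthFactor(13.1, 17.1) - 1)); // ~31%
        System.out.printf("DE share grew %.0f%%%n", 100 * (growthFactor(9.4, 11.3) - 1));  // ~20%
    }
}
```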

The intangible costs are also non-negligible. There is friction in the job market, as changing jobs risks incurring a potentially catastrophic coverage gap. There are bizarre industries focused on renegotiating issued medical bills, collecting on them, or managing health-related bankruptcies.

Pricing of pharmaceuticals generally defies the laws of gravity, as the incentives of regulators, suppliers, distributors, doctors and insurers have been distorted beyond anything resembling a fair playing field. In such an environment, playing games is superior to providing value and adhering to generally accepted rules. When it comes to pricing, the cost of providing the service is often the least important input.

Steve Ballmer recently: "If you look at these tax deductions for employer-provided health [...], they're really subsidies to the affluent, which I guess I hadn't thought about."

The biggest problem society faces at the moment is the vanishing middle class and lower-qualified jobs that still provide enough to subsist on. For the latter, the cost of food, shelter, fuel and health are key. Lower the cost of living and there will be more jobs worth taking.

"If you go back to 1960 or thereabouts, corporate taxes were about 4 percent of G.D.P.," Mr. Buffett said. "I mean, they bounced around some. And now, they're about 2 percent of G.D.P."

By contrast, he said, while tax rates have fallen as a share of gross domestic product, health care costs have ballooned. About 50 years ago, he said, health care was 5 percent of G.D.P., and now it's about 17 percent.

The good news is that the amount of health care spending is a choice. Other western countries run a health care cost to GDP ratio of 10%-13%, sometimes with vastly superior and more equitable outcomes than in the US.

The bad news is that this is a choice the House and Senate are making on behalf of the American people. And with the partisan divide and lack of agreement on fundamental values, things won't really change.

Add in a rapidly aging population, and being cognizant of the per capita health care spending steeply increasing at later stages of life, kicking the can down the road won't make the later adjustment any easier...

The real problem with health care is that it's a gravy train for all involved. Doctors, who don't invent anything new and who just practice garden-variety medicine are wildly overpaid. They don't like it be known, but the average doctor earns a quarter-million a year. Totally unjustified.

Then there are the medical device manufacturers, big Pharma and the hospitals. They all are getting rich off the current system. That's what needs to change.

Healthcare costs create huge problems across the economy, increasing the cost of everything from manufacturing to higher education.

Between myself and my employer, it costs about $20,000 a year to insure my family. My employer shares much of the cost breakdown, and it's interesting how much goes to prescriptions and how much of that goes to specialty drugs to keep a handful of people alive.

$500 million spent on healthcare.

$120 million goes to prescriptions.

$40 million of that went to specialty drugs, representing 1.7% of the prescriptions.

"The average ingredient cost of a single-source brand prescription increased by 14.9% in 2016 to an average $745 per prescription, mainly driven by high-cost specialty drugs. The average ingredient cost of multiple-source brand prescription increased by 49.5% to an average $585 per prescription. The average ingredient cost for a generic prescription decreased by 10.9% to an average cost of $34.04 per prescription. "
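A quick back-of-the-envelope check on the shares implied by the commenter's figures (this sketch just uses the numbers quoted above, in millions of dollars):

```python
# Figures quoted in the comment above, in millions of dollars.
total_spend = 500   # total healthcare spend
rx_spend = 120      # prescriptions
specialty = 40      # specialty drugs (said to be 1.7% of prescriptions filled)

# Prescriptions' share of total spend, and specialty drugs' share of drug spend.
print(round(rx_spend / total_spend * 100, 1))   # 24.0 (% of total)
print(round(specialty / rx_spend * 100, 1))     # 33.3 (% of drug spend)
```

So 1.7% of prescriptions account for roughly a third of the drug spend, which is what makes the specialty-drug figure stand out.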

By this argument, we should also be examining why we spend so much on education. In 2010, the United States spent 7.3 percent of its gross domestic product on education, compared with the 6.3 percent average of other OECD countries.

Surely spending dramatically higher amounts than other countries, with no better effects, is enough to drive us to consider how we can reduce the costs of education - and should make us think long and hard before considering proposals that we should throw even more money at this.

It's surely true that having a well-educated workforce improves productivity, but it's also true that having a healthy workforce does the same. I'm having trouble finding much difference between the two examples.

I really think health care costs in the US are a byproduct of Americans' obsession with convenience. Most lifestyles can be lived without any physical activity; the whole country is designed around the car.

Traveling around a bit, I've seen other cultures still require people to walk somewhat to get places, and people will also just go on "walks," whereas Americans will go for a "drive."

Food culture is also responsible: just jamming food into your face as quickly as possible rather than enjoying a meal is for sure an American thing.

Add all this up and you get obesity rates of 60% in adults, and it's getting worse.

There are no sane constraints on the prices. Instead of "price is what the market will bear," it's "the market will bear whatever price." This creates an irrational drain on the rest of the system. Whoever is sucking on that drain is doing well, though.

My grandfather used to get a shot that was $12,000 a pop, and didn't do anything.

I had kind of hoped that Trump would stumble into single payer health care as a solution for healthcare needs of his base and would somehow get it passed through congress with support from democrats. Alas, nothing of that sort seems likely.

Looking from the outside, it seems to me that the root of the problem is not health care in the US as such, but the prices of health-related services and products. Prices are so inflated that hospital and medication bills are huge compared to what the same things cost in Europe or elsewhere.

And the demand side of the question seems entirely ignored; why is that? Could expenses be higher than we want because we are less healthy than we should be? I see a lot of unhealthy habits, with costly interventions subsequently required. Just because we see no path forward to affect demand, leaving it out of the discussion will ensure the debate is framed only as which system can provide that volume of healthcare for a little more or a little less.

Would it be a worthwhile idea to open up health care internationally? Maybe insurance companies could create global standards for medical procedures, so that clients could choose in which countries they want to perform a procedure and then receive or pay the difference with regards to the cost of a national procedure. This could introduce some level of competition without jeopardizing quality - or am I missing something?

The pharma industry should open up; it is dominated by corporatism, and its monopoly patents drive up prices, pushing individual spending on drugs to levels unaffordable for lower incomes.

Health care costs would be lower, if more people were able to provide health care service. If the world focused less on rate my sandwich apps, and more on fixing humans, the prices would be much more affordable.

You're talking about an org that, if your address is in the middle of a swamp, they will send someone by boat to find you.

They take collection and processing very seriously.

If you gave a startup 200 million or whatever, you'd have a pretty accurate census of internet connected people in the top 20 cities in the US. Oh, and a declaration of victory from the startup, plus 300 billion in market cap in the hope that they may actually be able to count everyone someday!

Somehow, I doubt you'd come up with something that can collect data well, knows where to focus field representatives, and manages 600k+ field representatives in a reasonable and efficient manner.

Has anyone here tried to organize people, targets, and data in, say, a company twice the size of all of IBM? How did that go? :)

(The census is honestly relatively cheap: it costs about 50 bucks per person, total. Obviously, counting rural areas, etc. is the majority of the cost.)

The 2020 Census is going to be weird, too. Disadvantaged minorities, etc. are statistically less likely to be counted[1], and especially given the climate in the US, counting immigrants is going to be especially hard.

[1] Which is why, in the US, the Democrats often try to pass rules allowing the use of statistical methods, and the Republicans claim it requires actual enumeration.

I'm curious what caused the cost of the new electronic system to increase so much. I understand there could be a lot to it on the back end to ensure privacy and anonymity of the data collected, but it doesn't seem like it should be a huge deal technically. We're talking about ~325 million people and collecting demographic info and address info [1]. The IRS has far far more variables to collect info on, however they have fewer people. 325 million people is nothing compared to scaled companies like Facebook, Amazon, or Google (far more data points per person, far more people).

Any speculation on cause or other considerations I'm missing? Did a quick search on Google News and didn't find anything. All the companies I listed have huge teams, but I am still not seeing how the cost has exploded.

A bit unrelated but the US Census should really work with the US Postal Service when performing the census. It would save the Census bureau some money and would provide the Postal Service with an additional source of funding.

650 million dollars to tally some simple demographic information? I understand there is always more to the story, but this doesn't pass the smell test. Additionally, what does the Census Bureau need 1.5 billion for in a non-census year?

I don't mean that rhetorically - 1.5 billion a year pays 15,000 people an annual salary of 100k. Where is that money going?

It's ridiculous this is the top trending submission when the president fired the FBI director that was investigating his ties to Russia. If political stories are fair game, this is important but a rounding error next to Comey.

@dang, the HN ranking system is so trivially hackable via downvotes and flags by a motivated minority.

The above comment is an excellent example of political gaslighting; there is ample, undeniable evidence of some sort of GOP collusion with Russia. No sane person looking at the evidence could come to any other conclusion, but by confidently stating the exact opposite of the truth, the above commenter seeks to sow doubt in the mind of a potentially disinterested or confused audience. This is an increasingly common tactic on these boards.

Honestly, why is it so expensive? If you created a startup with $50 Million, and gave them 4 years to implement this system, I'm sure it would be done much more efficiently. Then you "buy" the startup for $200 Million at the end of it, and all the employees get a nice payout.

Why hire nearly a million temporary census takers[1] every ten years? Even private companies have databases with far more detailed and more frequent demographic and psychographic information on every person in the United States.

Hidden in the notes at the bottom is a pretty useful improvement to 'git stash':

> 'git stash save' now accepts pathspecs. You can use this to create a stash of part of your working tree, which is handy when picking apart changes to turn into clean commits.

I believe there may be a slight error in the GitHub blog post I quoted above: from what I can tell, it's actually the 'git stash push' command that now accepts pathspecs. But either way, still a neat new feature!
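A minimal sketch of the pathspec feature, assuming git 2.13+ and using a throwaway repo (filenames here are made up for illustration):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

# Commit two files, then dirty both of them.
echo one > a.txt; echo two > b.txt
git add .; git commit -qm init
echo changed >> a.txt; echo changed >> b.txt

# Stash only a.txt's changes; b.txt stays dirty in the working tree.
git stash push -m "only a" -- a.txt
git diff --name-only   # lists b.txt only
```

This makes it easy to peel one file's changes off into a stash while continuing to work on the rest.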

> git branch, git tag, and git for-each-ref all learned the --no-contains option to match their existing --contains option. This can let you ask which tags or branches don't have a particular bug (or bugfix).

I'm surprised that didn't exist already. Several years ago, I worked on a tool to scan SVN merge history and save in a graph database so one could ask this type of question, "Does this branch contain the fix?". Or the opposite, "Which branches do not contain this fix?".

It was a mess because there were 8 million commits in the repo and clients ranged from SVN 1.4 to SVN 1.8 (the server was upgraded too).

It would have made more sense to use git for something like that but it's hard to get thousands of devs to switch.
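For the curious, the new `--no-contains` flag can be demonstrated in a throwaway repo (branch names here are hypothetical, and git 2.13+ is assumed):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 1 > f; git add f; git commit -qm "base"
git branch before-fix              # branched off before the fix landed

echo 2 > f; git commit -aqm "the fix"
fix=$(git rev-parse HEAD)
git branch after-fix               # contains the fix

# Which branches do NOT contain the fix commit?
git branch --no-contains "$fix"    # lists only before-fix
```

This answers "which branches still need this bugfix?" directly, instead of inverting the output of `--contains` by hand.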

- Feed every run of k words into a convolutional layer producing an output, repeat this process 6 layers deep (section 3.2).

- Decide on which input word is most important for the "current" output word (aka attention, section 3.3).

- The most important word is decoded into the target language (section 3.1 again).

You repeat this process with every word as the "current" word. The critical insight of using this mechanism over an RNN is that you can do this repetition in parallel because each "current" word does not depend on any of the previous ones.
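The attention step above can be sketched in plain NumPy. This is a toy illustration of dot-product attention computed for all "current" positions at once (the sizes and random vectors are made up, not the paper's actual model), which is exactly why it parallelizes where an RNN cannot:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                            # embedding size
src = rng.normal(size=(5, d))    # encoder outputs for 5 source words
tgt = rng.normal(size=(3, d))    # decoder states for 3 target words

# One matrix product scores every (target, source) pair simultaneously;
# no target position waits on any other.
scores = tgt @ src.T                                                   # (3, 5)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # softmax rows
context = weights @ src          # weighted sum of source states per target word

print(context.shape)  # (3, 4): one context vector per "current" word
```

Each row of `weights` says which source word matters most for that output word; in an RNN decoder these rows would have to be produced one at a time.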

As far as I understood it, Facebook put lots of research into optimizing a certain type of neural network (CNN), while everyone else is using another type called RNN. Up until now, CNNs were faster but less accurate. However, FB has progressed CNNs to the point where they can compete in accuracy, particularly in speech recognition. And most importantly, they are releasing the source code and papers. Does that sound right?

In this work, Convolutional Neural Nets (spatial models that have a weakly ordered context, as opposed to Recurrent Neural Nets, which are sequential models that have a strongly ordered context) are demonstrated to achieve State of the Art results in Machine Translation.

It seems the combination of gated linear units / residual connections / attention was the key to bringing this architecture to State of the Art.

It's worth noting that previously the QRNN and ByteNet architectures have used Convolutional Neural Nets for machine translation also. IIRC, those models performed well on small tasks but were not able to best SotA performance on larger benchmark tasks.

I believe it is almost always more desirable to encode a sequence using a CNN if possible as many operations are embarrassingly parallel!

It's a strange thing, but almost all new database technologies seem to leave search as an afterthought for some later day instead of starting on day one with the assumption that "it's all about search".

A database system that doesn't support rich search capabilities is restricted to very limited types of applications.

Often search is left unimplemented for years, or perhaps never implemented.

From the intro page[1]... Many of the descriptions comparing it to NoSQL are wrong. There are plenty of NoSQL options that have similar features; though it isn't universal, it can be and often is there. Cassandra, for example, probably does just as well in multi-zone/DC concurrency. Consistency options are also similarly tunable. CockroachDB 1.0 was announced earlier as well.

It's not that I don't appreciate the option. This seems far closer to what DocumentDB should have been earlier on. Though tbh, I think Storage Tables are already pretty useful.

It seems like a "just throw all your data in this" kind of database, probably intended for everything but core application relational data (so, good for analytics, messaging, etc).

It sounds like the atom-record-sequence model at the heart of it is pretty key, but there's not a lot in the article about what that is and how it works. Is this a well-understood data structure used elsewhere?

The project seems very ambitious, and I could see it being used pretty heavily at a lot of companies. Thoughts?

These tensor cores sound exotic:

"Each Tensor Core performs 64 floating point FMA mixed-precision operations per clock (FP16 multiply and FP32 accumulate) and 8 Tensor Cores in an SM perform a total of 1024 floating point operations per clock. This is a dramatic 8X increase in throughput for deep learning applications per SM compared to Pascal GP100 using standard FP32 operations, resulting in a total 12X increase in throughput for the Volta V100 GPU compared to the Pascal P100 GPU. Tensor Cores operate on FP16 input data with FP32 accumulation. The FP16 multiply results in a full precision result that is accumulated in FP32 operations with the other products in a given dot product for a 4x4x4 matrix multiply."

Curious to see how the ML groups and others take to this. Certainly ML and other GPGPU usage has helped Nvidia climb in value. I wonder if Nvidia saw the writing on the wall, so to speak, when Google released their specialty Tensor hardware, and decided to use the name in their branding as well.
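The quoted throughput figure checks out if each FMA is counted as two floating point operations (one multiply plus one add), which is the usual convention:

```python
# Arithmetic implied by the quoted spec.
fma_per_core = 64    # FMA operations per Tensor Core per clock
cores_per_sm = 8     # Tensor Cores per SM
ops_per_fma = 2      # an FMA counts as a multiply plus an add

ops_per_clock = fma_per_core * cores_per_sm * ops_per_fma
print(ops_per_clock)  # 1024, matching "1024 floating point operations per clock"
```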

More great hardware being stuck behind proprietary CUDA, when OpenCL is the thing they should be helping with. Once again, proprietary lock-in that will result in inflexibility and digital blow-back in the long run. Yes, I understand OpenCL has some issues and CUDA tends to be a bit easier and less buggy, but that doesn't detract from the principle of my statement.

Wow, this is just Nvidia running laps around themselves at this point. Xeon Phi is still not competitive, AMD is focused on the consumer space; it looks like the future of training hardware (and maybe even inferencing) belongs to Nvidia. (Disclosure: I am and have been long Nvidia since I found out cuDNN existed and how far ahead it was.)

I actually wrote a House Bill for Montana in 2013 that would have been a pretty comprehensive privacy law for the state. My friend Dan tried his best to get it passed, but it was a bit too much new code for legislators to stomach. Thankfully Dan has broken the original legislation down into smaller parts and has really succeeded in improving privacy in Montana.

I think the article touches upon a key problem: even if some people are in principle willing to sacrifice some privacy in order to get a product for free, it should be required to state what data is shared with whom in clear human language (and not in a 20 page wall of legalese).

The relation between the user and a service is now completely asymmetrical: it is hard to know what your data is used for. It does not help that the legalese often boils down to 'you will sell your soul'.

"Facebook revoked users' ability to remain unsearchable on the site; meanwhile, its chief executive, Mark Zuckerberg, was buying up four houses surrounding his Palo Alto home to preserve his own privacy. Sean Spicer, the White House press secretary, has defended President Trump's secretive meetings at his personal golf clubs, saying he is entitled to 'a bit of privacy.'"

That said, privacy is being commoditized for everyone as well with tools such as Snapchat, the Epic Privacy Browser and TOR.

Last month, the true cost of Unroll.me was revealed: the service is owned by the market-research firm Slice Intelligence, and according to a report in The Times, while Unroll.me is cleaning up users' inboxes, it's also rifling through their trash. When Slice found digital ride receipts from Lyft in some users' accounts, it sold the anonymized data off to Lyft's ride-hailing rival, Uber.