I once tried to purposely destroy an Anker MicroUSB cable (bought a 5 pack that had three 3ft, one 6ft, one 1ft, didn't need the 1ft), by grabbing the head of it with clamping pliers and yanking the fuck out of it with the cable wrapped around my hand, and bending it to near 180 degrees repeatedly for about 5 minutes.

I could not kill it without straight up taking wire cutters to it, which I decided not to do.

The cable still works perfectly, and sits in my junk drawer for when I need to plug something into the front of my case and leave the thing sitting on top of the computer.

Literally any other brand? With most, I would have been able to rip the head off the cable after the first few yanks; the few that survived that would have had the wire pairs inside broken, either by the yanking or by the repeated 180-degree bends.

Remember those original Monoprice USB cables we all bought and loved back in the day? Those didn't survive that abuse, and Monoprice cables have only gotten worse since then. And the AmazonBasics USB cables (which I recommend only for the connector types Anker doesn't make; Anker only makes MicroUSB, Type-C, and Lightning)? They don't survive that level of abuse either.

Anker also has the best damned USB chargers for walls and cars I've ever seen (heaviest USB chargers I've ever had, no coil whine, doesn't EZ-Bake itself, shuts off completely when nothing is plugged in), they're just perfectly glorious.

The average consumer has no idea whether one of these cables will actually do what it claims. The only way to really know is from other customers' reviews, which we know are barely useful most of the time due to fraudulent reviews. Great job on the part of this Google engineer for writing such clear and technical reviews of these products.

Now that an expert has called some of these cables out for not adhering to the spec, shouldn't they be taken off Amazon? Shouldn't the sellers be reprimanded for false advertising?

This is cool, but coming from the online marketing world, I can already see people copying this in bad ways: "Hi, this is Tom, I am from iPhone Development Team 6 here at Apple, and we have been testing [product] in our labs. In our opinion, [affiliate link to other related product] is far better, because this product [technical critique that few will understand but will serve to bolster the reviewer's credibility, copied from some highly technical review site]."

If this is going to be a thing, Amazon should offer the ability to certify that the user at least has an email address for the company they claim they are with (@apple.com, @google.com etc) and show a "certified Googler" or similar logo next to their name.

The current rhetoric about the failings of online reviews is to point at fraudulent reviews. I posit that the real problem is, rather, the lack of weighting reviews by the expertise of the reviewers. Clearly the reviews of a hardware engineer at Google should carry more weight than those of an average consumer (on products like this), because, as bkmartin points out, the average consumer has no idea as to the true quality of a product. Perhaps the cable does work for them. Perhaps the cable mostly works, but the wire gauge is under spec and 1 out of 100 people end up with a fire. That won't be reflected in the overall rating of a product.

Case in point, the Juiced Systems product that Benson reviewed as 2 stars is currently listed as an overall 5/5 stars. In Amazon's defence Benson's review is currently displayed at the top of the reviews. So any shoppers that go through the effort of reading the reviews will quickly see it.

But this product shouldn't be rated 5/5 overall, and because it is, many consumers who can't be bothered to check the content of the reviews will be burned.

This, to me, is a greater problem than fraudulent reviews. Fraudulent reviews can be solved by getting more regular customers to review products they buy. Uber is a great example of UX design that gets customers to review the quality of the service nearly every time. But even if Amazon or other retailers achieve a higher rate of reviews by actual customers, the non-expert bias will remain.

Of course the trick is, how do we determine which reviewers are experts? Most review systems have a helpfulness rating on each review which could be used to weight reviews in the overall average. But that's only a proxy for an expert rating, is easily cheated, and it's harder to get customers to rate reviews than it is to get them to at least review the product.
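To make the idea concrete, here is a minimal sketch of expertise-weighted averaging, using the helpfulness ratio mentioned above as a stand-in for an expert score. The function name and the Laplace-smoothed weighting formula are my own illustrative assumptions, not any retailer's actual algorithm.

```python
def weighted_rating(reviews):
    """reviews: list of (stars, helpful_votes, total_votes) tuples.

    Returns a star rating where each review counts in proportion to a
    crude helpfulness-derived weight, or None for an empty list.
    """
    num = 0.0
    den = 0.0
    for stars, helpful, total in reviews:
        # Laplace-smoothed helpfulness ratio, so a review with one
        # lone upvote doesn't dominate; a real system would use a
        # better prior (or an actual expertise signal).
        weight = (helpful + 1) / (total + 2)
        num += stars * weight
        den += weight
    return num / den if den else None

# One expert 2-star review with strong helpfulness votes pulls the
# result well below the naive 4.25 mean of these four ratings.
reviews = [(5, 1, 2), (5, 0, 1), (5, 2, 4), (2, 95, 100)]
print(round(weighted_rating(reviews), 2))  # prints 3.76
```

Even this toy version shows the point: the unweighted average hides the one review that actually measured the product.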

These reviews took me back (and made me laugh). They have the same to-the-point-and-technically-correct tone as code reviews internal to Google.

In all seriousness though, these are important for a non-obvious reason: Nexus 6x devices don't ship with USB2-USB3 adapters (they only have a USB3 cable with a wall charger).

So ... anyone who wants to hook up their Nexus 6x to a laptop will almost certainly go to Amazon to buy a cable. This is what I did last week, and just today I discovered that certain cables do not support quick charging.

The most shocking part is that some vendors and manufacturers blindly criticize and reject Benson's reviews, completely ignoring his technical analysis. This makes me never want to purchase anything from "Cable Savage", for example:

>Benson I can say with confidence this cable has been tested and can provide up to 3AMPs of power. Since you are supposedly on the google team, then you will know that the Pixels original OEM charger actually has an output of 5 AMPS. (What 3rd party charger were you using to attempt to charge a pixel with a 3AMP cable, when the pixel demands 5amp output?
>
>These cables have been confirmed and work great with the Nexus 5x and 6P. It even can replace the 12" macbooks charging cable with a proper walll charger. The power output is relied on the actual wall charger. The cable with be able to provide ample power given that the wall charger is correctly powered. Also please note, that batteries charge at different mAh intervals depending on the percent full the battery is at.
>
>Our cables can charge the 5x and 6P units at the same speeds as the stock oem cables.

Benson's counterreply explains that 3 A is actually a violation of the spec, and that the Pixel charger never outputs 5 A, etc.

Ports/devices with older connectors (USB-A, mini-B, micro-B, B) aren't necessarily able to (or supposed to be able to) handle a 3 A current draw.

A device that supports USB-C should handle current draws this high. But to do so, it must be able to determine whether it is safe to draw that much current, and per the spec that's only doable over a USB-C to USB-C connection. So a USB-C to USB-A cable (or any other USB-C to legacy cable) should use a resistor identifying that the USB 3 current limit should not be exceeded.

The cables he's reviewing identify themselves as capable of the full current load when they aren't. That could damage whatever device your USB-C phone or laptop is plugged into.
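The resistor scheme in question can be sketched roughly as follows. The Rp values and their meanings come from the USB Type-C specification (for a 5 V pull-up on the CC line); the function and its name are just an illustrative model, not real driver code.

```python
# USB Type-C: the pull-up resistor (Rp) on the cable/source's CC line
# advertises how much current the power source can supply (5 V pull-up
# variant of the spec's Rp table).
RP_ADVERTISEMENT = {
    56_000: "Default USB power (500 mA USB 2.0 / 900 mA USB 3.x)",
    22_000: "1.5 A at 5 V",
    10_000: "3.0 A at 5 V",
}

def check_legacy_cable(rp_ohms):
    """A USB-C to legacy (e.g. USB-A) cable must use the 56 kOhm Rp,
    because a legacy port can't promise more than default current.
    Returns (what the resistor advertises, whether that's compliant)."""
    advert = RP_ADVERTISEMENT.get(rp_ohms, "unknown/invalid Rp")
    compliant = rp_ohms == 56_000
    return advert, compliant

# The out-of-spec cables use the 10 kOhm resistor, telling the phone
# it may draw 3 A from a port that never agreed to supply it:
advert, ok = check_legacy_cable(10_000)
print(advert, "- compliant:", ok)
```

In other words, the bad cables aren't "broken" electrically; they simply lie via the wrong resistor, and the attached device believes them.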

We've done a similar analysis of USB 3.0 host controllers and the state of their drivers cross-platform. It's amazing how bad the state of USB 3.0 is, even 5 years after devices started shipping. Even the latest Intel host controllers (8 & 9 series) initially shipped with bad drivers. I look forward to getting our results posted.

So many companies have a very short list of "approved" compatible devices and get ornery if you use something else; often those approved devices are unavailable under that exact product name or number and so you're left guessing as you replace stuff in two, three, or four years, which is actually when you're likely to find yourself needing to replace components like this. The very worst companies (ahem, Apple) only validate their own devices for inter-operation with their products, and act as though using something else would be the pinnacle of recklessness. Some even threaten it'll void the warranty (which is actually not legal in many cases, but companies do it anyway).

The fact that someone from Google is doing this in a non-Google location for the benefit of the standard and of their customers is just really nice. Certainly, it is in their best interest to have their devices behave reliably and not be damaged by shoddy third party devices; and so Google does benefit. And, yet, many companies just don't see it that way and are willing to screw over their customers just to get that extra few bucks after the sale for accessories that could just as readily be provided by third parties (and often third parties offer more variety, making them more fit for purpose in some cases).

In short, I've lost a lot of trust for Google over the years (mostly as marketing and violation of privacy became their primary revenue sources), but this is a good thing, for the consumer, for the industry (standards are great! implementing standards poorly is bad!), and for Google.

Looks like Google is doing an amazing service for customers in this area.

Not only are they using a standard plug, and thus not locking you into buying accessories from them (which seem reasonably priced anyway), but they even test a lot of other cables on the market and tell you how compatible they are, both with their own products and with the products of competitors who have quite different practices.

I worked a little with Benson. He's a fantastic guy with a penchant to veer into epic rants, which he seems to have kept in check on these reviews. Our loss!

Anyways he is a kernel software engineer and surely has some assistance (or at least double checking) from the resident EEs. This feels like a bottom-up initiative, showcasing how Google can empower its engineers. Great stuff.

I wonder though: from the few reviews I've looked at, it seems that the main complaint is that the cable does not identify itself as a legacy connector, and can result in the device drawing too much power from the port, potentially damaging whatever is on the other end.

Definitely an unfortunate result, BUT: I wonder, is it still a common concern when the device on the other end is (for example) a wall plug-to-USB connector? Because if it's not too much of an issue in that case, and the device on the USB-C end is not at risk, wouldn't a non-compliant cable result in faster charging?

Related: has anyone written reviews of car USB ports, what devices they're capable of charging, and workarounds for charging higher-current devices? Many car USB ports have failure modes similar to those documented in these reviews, and they can't actually deliver high current for charging high-end smartphones, let alone tablets and laptops.

"USB" and the various USB logos are trademarks of the USB-IF. I was under the impression that non-compliant products can be sued for trademark infringement by the USB-IF. Sure, they could be sold without the logo, but not being able to put the word "USB" in their product description would make them unsearchable and kill their online sales...

I bought the TechMatte 2x USB-C to Micro-USB item that he gave two stars for use with my 5X. At least one of the items I got was actually defective. The other worked but charged very slowly. I wish I had seen his post before buying!

Now I am nervous because I subsequently bought a cable that he hasn't reviewed yet...

This has me wondering if there is some way to incentivize good reviewers. There is Consumer Reports and some more tech-specific sites but there is still a long tail of products that you have to buy without good reviews.

Google screws up cables as well. My Chromecast (V1) came with a dud cable that will provide some charge to some devices, but won't power or charge anything I've tried it on. So my Chromecast is now powered courtesy of Lenovo.

It's a matter of statistics though, as with anything mass-produced. But I found his comment on the quality of 3rd party manufactured cables vs Google and Apple slightly amusing. Apple cables in my household and everywhere I've come across them are no prize pigs. Samsung Galaxy-provided cables, on the other hand, have proved exceptionally durable in my experience. Nokia cables and chargers also seem to be made out of solid Kevlar or some flexible Titanium, seeing that every single one I've owned since the late '90s is still working. My blue LED modded Nokia 3310 is still going strong, battery and all, but that's another story.

I just bought an HDMI "hub"/duplicator, and only now realized that not all HDMI cables are the same: some support 1.3, others 1.4 (or whatever the version numbers are).

Maybe it's okay; so far the Wii/PS4 and Chromecast work fine through it... I'm just posting this here because no one at Best Buy informed me about possible complications, though the great folks at Staples actually forwarded me to Best Buy to look for such a device.

My father told me a story about service in Turkey: basically, if you go to a store and they don't have what you need right away, they may go out and find/buy it for you, just to please you. Or buy you a coffee while you wait in a shoe shop.

Whether he exaggerates a bit or not, I like this approach. Yes, you might lose a sale here and there, but you'll always leave good customers with a smile. They might come back just for that little appreciation...

If only there were more of this stuff! The kind of information in his reviews is technically detailed but clear enough for the layperson, and it answers the real question I have in mind when I'm looking at the 1-3 star reviews: does it actually work for what I want it to do?

Benson is doing God's work. Because of him I was able to purchase a USB-C to USB-A cable (the iOrange cable he reviewed), and I learned that Google's own marketing materials were functionally incorrect: they led users to expect 3.0 A charging out of the cable from Google (currently out of stock), when it technically charges at only 2.4 A (if I understand things correctly). This in turn led people to incorrectly leave poor reviews of the non-Google cables.

What I'm missing is a simple page with information on how I can perform tests like these on USB-C cables myself. I find reading the specs takes too much time to be able to work out how to perform them. Case in point: the 57 MB zip file for the USB-C spec: http://www.usb.org/developers/docs/

One of his reviews brought up an interesting point. Version 3.1 of the spec does not limit the length of a cable, but version 2.0 does. Is a 4 conductor cable with a USB-C connector on one end only required to meet the USB 3.1 standards?

Now you know why Apple made their own plugs and cables: you can't trust third-party manufacturers. Google probably gets lots of complaints about devices not charging properly when it is the cable at fault.

You wouldn't get Apple doing this for third-party Lightning cables... yeah, I know it would be like them giving away their own lunch, but it further underlines that Lightning is just a money-spinner, not any real technical advance.

So uh, maybe they can explain why the DisplayPort USB-C adapter for the Chromebook Pixel only works in one orientation? Yes, that's right. You heard me. You have to flip it over 50% of the time to get the screen to come on. (This might be Linux specific, I don't have ChromeOS to check)

This is just one side of an ancient debate: If component A doesn't work with component B, where is the problem? The manufacturer of A blames B, and vice-versa. (The truth is that they are simply incompatible and often it's nobody's fault.)

If many USB-C cables don't work with Google devices, where is the problem? It suits Google to say the problem is with the cables, but maybe creating pressure for better cables or shifting consumer attention to them is just a less expensive solution than altering the devices.

Regarding the violated specifications: 1) In the real world, nobody meets all published specifications and both sides can point to violations. Have you ever built anything that met all specifications? (I laughed a little when I wrote that) 2) If you design your product to work only with other systems that meet all published specifications, your product sucks. If your car only runs safely on roads that meet all specs, your customers are going to die. (Maybe there are exceptions when engineering critical systems components in things like airplanes, but I doubt it.)

That doesn't make our Google engineer wrong; it just means we don't know (unless you know what realistic performance specs are for USB-C cables). However, if in the real world it's hard to find USB cables that work with Google devices then, ipso facto, there's something wrong with the devices.

According to 18.37.3 and 4, microorganisms cannot be excluded from patentability. I assume this is to allow patenting of things like probiotics. However, all humans rely on their skin, mouth, and gut floras to be healthy. If the bacteria and yeast in those floras can't be excluded from patentability, are they considered not part of the human animal? I understand that probiotics should be protected, but I wonder if someone could take advantage of this and claim a patent on any naturally occurring microorganism just by isolating it and showing that it has some use.

Something else not specified in this section is viruses. Viruses are not strictly microorganisms, and no mention is made of them, yet they can be manufactured and used for treatments, recently even for cancer.

If viruses can be excluded from patentability since they aren't mentioned, then any research or manufacturing done on them would not be patentable, and therefore some companies may hesitate to invest too heavily in that research.

>Article 18.76: Special Requirements related to Border Measures
>
>5. Each Party shall provide that its competent authorities may initiate border measures ex officio [118] with respect to goods under customs control that are: (a) imported; (b) destined for export; (c) in transit, and that are suspected of being counterfeit trademark goods or pirated copyright goods.
>
>[118] For greater certainty, ex officio action does not require a formal complaint from a third party or right holder.

I suspect it will take a while for all of this text to be digested and for people much smarter than me to find a lot of nasty stuff in there.

Meanwhile, I found the New Zealand and US side letter amusing:

>To the extent contemplated in the Code, New Zealand shall not permit the sale of any product as Bourbon Whiskey or Tennessee Whiskey, unless it has been manufactured in the United States according to the laws of the United States governing the manufacture of Bourbon Whiskey and Tennessee Whiskey and the product complies with all applicable regulations of the United States for the sale or export as Bourbon Whiskey or Tennessee Whiskey.

I assume this must be quite prevalent in New Zealand if they wrote a letter specific to this one issue. I didn't see a similar letter with France regarding cognac or champagne.

New Zealand's copyright term is extended from 50 to 70 years, with a grandfather clause. While this aligns with the "Mickey Mouse" extensions used by other countries, it goes against copyright's original intent to benefit society, since the public can't build on or make use of copyrighted works for an extra 20 years.

A provision allowing DVDs purchased overseas to be unlocked is also retained.

I could not see any provision to restrict tax havens or to curb tax avoidance by multinational corporations.

>Article 14.17: No Party shall require the transfer of, or access to, source code of software owned by a person of another Party, as a condition for the import, distribution, sale or use of such software, or of products containing such software, in its territory.

In 19.1 on labour, party nations are required to "adopt and maintain in its statutes and regulations" certain rights, including

- "freedom of association"

- "a prohibition on the worst forms of child labour"

- "the elimination of discrimination in respect of employment and occupation"

- "acceptable conditions of work with respect to minimum wages, hours of work, and occupational safety and health."

Sounds nice, but there is absolutely no guidance on what types of regulations these incredibly subjective "rights" would require, and I imagine every party will interpret them differently (and probably conclude that their existing regulations already provide all of these guarantees.)

"The most shocking revelation from today's release is how the TPP's Investment chapter defines "intellectual property" as an asset that can be subject to the investor-state dispute settlement (ISDS) process. What this means is that companies could sue any of the TPP nations for introducing rules that they allege harm their right to exploit their copyright interests, such as new rights to use copyrighted works for some public interest purpose. A good example of this might be a country wishing to limit civil penalties for copyright infringement of orphan works ...

... the E-Commerce chapter has the next most serious ramifications for users ... it restricts the use of data localization laws, which are laws that require companies to host servers within a country's borders, or prohibit them from transferring certain data overseas ... The E-Commerce chapter ... imposes a strict test that such measures must not amount to "arbitrary or unjustifiable discrimination or a disguised restriction on trade," a test that would be applied by an investment court, not by a data protection authority or human rights tribunal."

EFF wrote previously about conflict between TPP and US Copyright Office efforts to improve the situation with Orphan Works, https://www.eff.org/deeplinks/2015/08/users-ustr-dont-sign-a..., "... the Register of Copyrights acknowledges a need to do something about the fact that "orphan works are a frustration, a liability risk, and a major cause of gridlock in the digital marketplace." The report includes a discussion of several proposals that could expand access to orphan works. One proposal is to put limits on the legal consequences for those who do anything technically infringing, in order to make it less daunting to take a chance and use them."

1. The Parties shall endeavour to cooperate on promoting transparent and reasonable rates for international mobile roaming services that can help promote the growth of trade among the Parties and enhance consumer welfare.

I would suggest that while reading this and forming an opinion, you take all the consequences of the treaty into account.

In something this large there will definitely be points that are objectionable, but that doesn't mean TPP as a whole isn't good for the parties involved.

Reaching agreement between governments that span the globe and govern hundreds of millions of citizens requires a lot of horse-trading and compromise. The final agreement can still be good for the world, even if there are objectionable provisions.

I've heard many folks complain about TPP on the premise that it will destroy and/or degrade American jobs. I believe there is lots of truth to that--people in other countries are usually willing to work for less than Americans.

Even so, TPP will help job-hungry people in other countries (at least slightly) by dumping more jobs into their job markets. So, if we're going to help Americans by ditching TPP, we're going to do so at the expense of people in other countries.

Is that the right trade? Helping Americans by hurting others? Maybe it is.

We were told that the draft says ISPs are liable for copyright violations on their networks, or something like that. Is that in the final document? Does anybody know enough legal language to clarify this issue?

Also, does the document say anything about terms/expiration of copyright? How does it treat Creative Commons?

Is there a reason why the US is negotiating a transpacific and a transatlantic trade agreement separately? The fact that, except for the US, no one knows what will go into either agreement until the end of the negotiations seems pretty odd.

When decisions that will bind the future of citizens are irreversibly taken without their consent, it has to be called by its real name: authoritarianism.

Technocracy boils down to aristocracy by diploma (often correlated with birth) instead of pure aristocracy. Still, the people of the nations should be the ones enacting the laws. And if we mandate people to make laws in our name, it is not acceptable that our consent is not sought through debate.

Modern so-called democracies are democracies in name only in this case.

I might not be the only one thinking that governments are losing their legitimacy. I am eligible to be called up under the flag, and should my government call me to defend its values/regime... I will not.

The social contract between modern governments and citizens has been broken. It does not bind me anymore, since the other party is not respecting its terms.

>Article 1.1: Establishment of a Free Trade Area
>
>The Parties to this Agreement, consistent with Article XXIV of GATT 1994 and Article V of GATS, hereby establish a free trade area in accordance with the provisions of this Agreement.

It begins with a bunch of undefined acronyms that apparently pull in tons of other shit not written down in this paper.

Obviously, some economists like globalization and "levelling the playing field". However, it will probably be recognized as a fashion and fad soon enough. (How far ahead do economists think? 3 months or what?)

I wish there were a similar level and quality of resources for what I think are called lifestyle businesses. By that I mean product-based businesses with at most a few million in revenue, a team of roughly five, a solid sustainable market position, and no ambition to revolutionize anything or become a unicorn.

I know a lot of people attempting this and they mostly seem to be flying under the radar, or at least have nowhere near the cachet of a startup. They are often bootstrapped, frequently for lack of other options.

For those of us who don't want to be in the pressure cooker or are turned off by the hype machine, these businesses are a viable alternative route toward independence, and possibly toward a significant impact. That they have become as attainable as they are is, I think, also quite remarkable.

I have a really hard time buying a lot of this about how novelty and monopoly are keys to a successful company. If you look at the really successful startups, especially unicorns, almost none of them are actually monopolies or new ideas. The vast, vast majority are old products done right, and almost all of them have very substantial competition. This attitude is, in fact, encompassed in the near-footnote-like section entitled "Competition." That section sums it all up: success is determined by obsessively improving the company. Competition isn't what kills; it's the failure to keep improving.

Look at the list of current unicorns. I can't think of one that doesn't have very, very substantial competition, or that is a genuinely novel product. All of them ventured into highly competitive, well-worn fields, and what set them apart was quality of service, ease of use, and responsiveness. Very few made conceptual leaps in the underlying product; they mostly made leaps in lowering the activation energy to use products, or in solving associated logistical problems.

Sorry Sam, I have to politely disagree with you on this one. Lord knows you are the one with the resume and authority on this, but I am a startup lawyer and work with clients on this stuff all day, so I am not totally unqualified. I do defer to your judgment, obviously, on companies that you want to fund, and your track record more than speaks for itself. However, what I want to know is if there isn't some disconnect between the companies you do fund and the attitude that is expressed in this post. I would love to hear your insights or opinions on whether you feel that I have this wrong, and if you think that the next generation of unicorns are going to be novel monopolies, or that maybe I am misreading the characteristics of current successful startups.

I feel like this needs a "when to give up" section. Like, at some point it's obvious, right? When is that point? Should you just destroy your entire life and never give up, no matter how long it's been since you've had steady growth? I've seen startups struggle for years with average growth, or really slow linear growth where real profitability was years away. Do they just continue? Is that the advice? What about their employees, who are being paid less than market value?

> an even bigger problem is that once you have a startup you have to hurry to come up with an idea, and because it's already an official company the idea can't be too crazy. You end up with plausible-sounding but derivative ideas. This is the danger of pivots.

A great point, that I haven't seen in too many places. I sometimes feel like we're seeing too many people who "want to have a startup" for the supposed fame and fortune, and not enough who are truly passionate about an idea. Believing in an idea will get you through, not dreams of gold coins.

Sam, I noticed you didn't mention watching cash burn or unit economics. Is that a later section you might add? Too many founders don't realize the importance of that until it's too late (speaking from personal experience).

> On the other hand, starting a startup is not in fact very risky to your career: if you're really good at technology, there will be job opportunities if you fail. Most people are very bad at evaluating risk.

It's true that career risk is low, but opportunity cost could be high. If you're well into your career, taking a few years off to work at a startup that might fail could really be a million dollar tradeoff.

So you really gotta believe in your startup.

(Note: I left my comfortable high paying job earlier this year to start a startup)

"We once tried an experiment where we funded a bunch of promising founding teams with no ideas in the hopes they would land on a promising idea after we funded them.

All of them failed."

I was under the impression Reddit fell within this category. I recall a PG quote that went something like "we [Y combinator] hate your idea, but we like you [Alexis and Steve]" in reference to reddit's initial YC funding.

I know Reddit isn't considered a smashing success by VC standards (originally sold for roughly 15-20 MM), but I certainly wouldn't call it a failure.

I would love to see more content and discussion around this. I understand clearly why it is important to pitch who you will be not who you are when recruiting and raising money but when it comes to day to day, doesn't this contradict what you had said earlier about sharing all of the good and bad with your employees? Replacing your water jugs with Kool-Aid at the office just seems evil to me. I'm not sure if that is really what you are saying or not but it seems synonymous with unicorn culture. I see a lot of positives to creating a culture masked with illusion, but in my head, all of the value seems short-term.

It's also a more attractive and credible way to introduce fast-growing technology startups* to the uninitiated than just saying, "oh, just read this bunch of blog posts by this guy called 'pg' who you've never heard of, no, no, trust me, it's reaaally good, he founded YC, which you've also never heard of, but trust me, they're like, the real deal".

*YC-style/SV-type, even if some of the advice is more general, and the definition of 'YC-style' is quickly expanding and losing meaning.

Okay, I've been researching and reaching out to various entities for, well, going on a couple of years now regarding my "start-up" concept: a company that creates inventions, brings them to market, and licenses the technology as another revenue stream. Yes, it's not a software-oriented business, but it's a viable entity with multiple prospects. Thus, the following line doesn't really ring true to me:

>One important counterintuitive reason for this is that it's easier to do something new and hard than something derivative and easy. People will want to help you and join you if it's the former; they will not if it's the latter.

In every single instance where I actually get a response, there's a consistent chorus of "this doesn't fit the model of what we support," and, basically, I chalk it up to an investment environment that actually, truly targets the derivative and easy more so than the unique and difficult.

That's why I'm still slogging along in the self-directed patent process. Nobody is interested in helping (beyond some constructive comments I've received here from community members - thank you!), and certainly not in contributing financial backing. It is what it is...but that claim? I don't really buy it.

"One important counterintuitive reason for this is that it's easier to do something new and hard than something derivative and easy. People will want to help you and join you if it's the former; they will not if it's the latter." So Google didn't invent Internet search; it was a derivative, but not easy. Facebook didn't invent the social network and actually made it easier than, e.g., MySpace. Amazon was not the first online retailer but made it better, more reliable. WhatsApp? We knew how to send text messages for a while, but they made it more convenient. So a good idea doesn't have to be something uniquely new, but something which makes the product better than the rest.

If this is something that is expected to be around for awhile I recommend talking to an editor. Friends are good to check for overall content and typos, but they are too familiar with the subject to check how well you convey something and too nice to really tear through the text objectively.

Nice essay. IMHO the contents are (1) common, good advice, (2) good data from Sam's excellent experience, and (3) some not-so-good advice and attitudes.

Scope:

The essay seems aimed mostly at current Silicon Valley (SV) style, mostly consumer, information technology startups. Okay, but that's not all of business or even all of startups. Yes, YC is pursuing much more, e.g., some shoe company in Pakistan; still, the essay is SV style and, there, mostly Web and mobile. Fine with me, because that's what I'm doing, but we should understand this point about scope. And, as below, we should understand this scope and style because IMHO it's time for the SV style of consumer Internet to borrow from some of the rest of technology and business.

Broad Point:

As we all can see, all heard from Mark Andreessen, etc., and see from Sam,

" ... investors' returns are dominated by the big successes, ... "

The Exceptional:

So, from this Broad Point, the goal is something exceptional. From that, we have to suspect that we won't always be following the common and ordinary, some extensive experience and observations from the past, or even "big successes" from the past, and, instead, should be willing to consider some exceptions in order to be exceptional.

Users' Love:

> "Your goal as a startup is to make something users love."

Yup.

Now, can we, please, have some more guidance on just how the heck to do that? And, please, don't ask me to draw from Snapchat or Homejoy. And I'm concerned about

"However, these statistics also reveal a grimmer reality: 93 percent of the 511 companies accepted by Y-Combinator have failed. Even more alarming, only 3 to 5 percent of the companies that apply to Y-Combinator are even accepted, meaning that only one in every 200 companies that applies to Y.C. eventually succeeds."

(1) Find a problem that a huge number of potential users/customers believe or can come to believe is really important to have solved. E.g., want, say, 1+ billion people with Internet access, if only from a smartphone.

(2) Find a solution that is much better than anything else and difficult to duplicate or equal.

(3) Make sure that for the target customers, right away or soon, the solution will be seen as a must have and not just a nice to have. Want no doubts here; do not want to have to depend on gossip and fads from notoriously flighty teenage girls. One of the best examples would be a single pill, safe, effective, cheap, to cure any cancer.

(8) Be a solo founder until at least $10 million a year in after-tax earnings.

Difficulty:

> "A word of warning about choosing to start a startup: It sucks! One of the most consistent pieces of feedback we get from YC founders is it's harder than they could have ever imagined, because they didn't have a framework for the sort of work and intensity a startup entails."

There's something wrong here: All across the US, east to west, north to south, crossroads to the largest cities, millions of sole proprietors do startups and are successful enough to buy houses, support families, and get the children through college.

All the larger bodies of water in and around the US have boats and yachts, and nearly all the owners are just such entrepreneurs. Maybe they own 10 fast food restaurants, are big in asphalt paving, are a manufacturer's representative for some great lines, run five new car dealerships, own and manage 2000 units of rental property, have a private label line of industrial floor cleaning supplies in a mid-size Midwestern state, did a rollup of dry cleaning stores, are the main beer distributor for half of a state, are a leader in design and construction of custom tanks on truck frames for hauling liquids, etc.

But a startup that exploits information technology, software, Moore's law, and the Internet should have some advantages and generally be less difficult.

Idea:

> "Remember that at least a thousand people have every great idea."

Maybe true with SV style, but more generally, no, and a thousand times no.

Instead, since so many startups fail, we want some advantages and definitely can get a lot of advantage from having a genuinely new idea. People who wrote a Ph.D. dissertation that was supposed to be "an original contribution to knowledge worthy of publication" and "new, correct, and significant" will quickly appreciate the importance of a unique idea and a lot about how to construct such. Here SV style is seriously lacking and, as above, needs to borrow from outside.

As in Sam's

> "Remember that at least a thousand people have every great idea."

I believe that SV style nearly trivializes the idea and, to raise the success rate, very much needs to go much deeper into the idea and associated considerations of user need, market size, meeting the user need, and new, proprietary technology to meet the user need especially well with a product difficult to duplicate or equal, and protected intellectual property that supplies a barrier to entry. In some places such unique intellectual property is taken very seriously, with laws, contracts, national security classification, etc. For higher success rates, SV style needs to do better with such intellectual property.

Several good examples of such intellectual property and its power are in the picture

Once again, over again, one more time, yet again, we come to the issue of team. Again we learn that it's tough to get a good co-founder; co-founder disputes are a major cause of startup failure; it's tough to hire good staff; it's difficult to keep the staff well involved; being a good leader and manager and learning to do so is a lot of work; and BoD members rarely know much about the details of the business, likely much less if some new, unique, powerful, valuable, crucial core technology is key to the business.

So, with all those clear dangers to the startup, we begin to conclude: Be a solo founder, get to earnings ASAP, grow organically, hire no one until well into very good profitability, and from the start carefully plan never but never to accept equity funding or report to a BoD. Or follow the example of Markus Frind and his romantic matchmaking Web site Plenty of Fish: initially just one guy, two old Dell servers, ads just from Google, $10 million a year in revenue, and recently sold for $575 million in cash.

Understand the Users:

> "... it's critical you understand your users ..."

Right. And for making, say, really nice seat cushions for the driver's seats of Rolls Dropheads with owners in the Chablis and Brie set in the Hamptons, sure.

But in consumer Internet, for a big success, there will be millions, maybe billions of users, and about all that can be said about those users is that they are a not very special cross section of humanity. So, really, you just have to understand the pair of the product and the ordinary man on the street.

I'm so freaking happy that someone finally affirmed my feelings that maybe, just maybe, I don't need to start my own startup with the notion of "Unicorn or Bust". I've just felt wrong since graduating college 3 years ago, unable to motivate myself to hack outside of work, and this finally captures why I've felt so tired. I am tired of feeling like I need a Unicorn idea to justify working on something outside of my job. I want to work for myself, but it just hasn't felt possible without a plan to "Take over the world". I don't want to take over the world. I want to build something that people use and can sustain me. That's it. But for every idea, there are millions of reasons in the back of my head that stop me from doing it, all boiling down to "Well, I just won't be able to grow this as a startup".

I don't really care about growing something as a startup. I don't need to revolutionize anything. I just want to make someone's day better through software. I want to launch a cool product that people find fun, silly, useful, critical to their process, whatever you want, and NOT be beholden to interests of anyone who isn't involved in the daily operations of whatever product that is.

I just want to build something, and make it better every day. Something that I own, that I can change however I want, whenever I want. I shouldn't have felt like I needed a16z to invest in my company to believe that my product is worth something.

edit: This isn't to pass judgment on the merits of the article, which seems a bit conveniently timed to a product launch linked at the bottom. But as a community we should realize what gets flagged/pushed down and by how much.

final update: at 19:24 GMT, this is off the front page at position 33. Age 5 hours, 536 points, 112 comments. In position 18 is a story from 10 hours ago, 362 points, 192 comments (https://news.ycombinator.com/item?id=10505362); the 10506372 story referenced previously is now in position 31, 62 points, 16 comments, 5 hours ago.

People find themselves drawn to the unlikely startup success stories because they have a beautiful allure to them: if I spend every night tucked away writing code and focusing my entire life around my work, then maybe I, too, can be the next Zuckerberg!

Even though this narrative is hugely removed from the realities of startup life (most college startups go nowhere, the most successful founders tend to be in their middle age with significant financial stability, etc), it is still romanticized by founders and startup employees and other people who really should know better. So then why do they buy it?

To get people to work harder for you. Spending every waking moment in front of a customer or a computer screen eating bulk ramen sounds like a great montage scene in the movie you'll have an EP credit in, and this distorted reality is even easier to sell to impressionable young college grads who have maybe .1% of the equity (in options!) that founders and VCs get to keep.

Why are jeans and hoodies the fashion choice of founders? Because if everyone is used to dressing like a poor college student, they tend not to notice how little they're truly being compensated.

This isn't some big founder/VC conspiracy, it's complicit common sense.

I'm eagerly awaiting the forthcoming geek poet who'll write our Howl. Who saw the best minds of our generation destroyed by startups... their skulls bashed open by a sphinx of capital... that moloch "whose mind is pure machinery... whose soul is electricity and banks!"

This is brilliant. The one thing it doesn't go into is the harm that will be done to healthy ventures when the bubble pops. Suddenly, everyone will have a bad taste and healthy ventures won't be able to get sane necessary resources and bridge loans, all thanks to the excesses of what's described in the piece. Not engaging in the insane startup culture described isn't just a healthy lifestyle choice, it's also taking a stand against a situation that will soon very much harm the tech industry as a whole. When the tide rushes away from the unicorn machine, it will carry away the innocent as well.

Most of the time, "disrupt" is a euphemism for "put a Web 2.0 interface on". You're not disrupting shit, you're putting whitewash on a shoddily-cobbled-together product that other people have been doing better for decades. It's all style and no substance.

If you're doing something new, you'll probably own the industry just by virtue of being there first. But you'll also probably not grow to be super-huge, because if you were solving a huge problem, it probably would have been solved already.

The exception to this is when the problem is huge, but the technology wasn't there to solve it before: see search (Google), social media (Facebook), mobile apps (Uber). And even then the conditions need to be just right (an app for calling cabs would probably not have been nearly as successful if it didn't coincide with a legal loophole that allows them to undercut the taxi industry).

But most ideas aren't those ideas. And that's okay. A business that solves a real problem, even if it isn't a huge problem, will likely be able to stay alive, profit, and grow just fine. Those "disrupting" companies that are all style and no substance will crash and burn when the tech bubble pops.

However, DHH's constant railing against the VC-backed world seems a little tiresome. There seems to be a religious fervor to his essays that their way is morally better (e.g. their business model seems like an honest transaction vs. VC-backed startups who inflate numbers, etc.).

I think a lot of people get that raising VC is not the only way to build a business (there is even a nice tradeoff statement: do you want to be Rich vs. King?).

It's a choice one makes. It's not morally inferior or superior to raise VCs or to bootstrap.

I think DHH (and a lot of people) miss out on a lot of modestly growing startups that are doing relatively boring things, because they don't hear about them. It's not that these startups don't exist; they just don't make noise, because they're relatively boring.

The vast majority of (moderately successful) startups I see friends and colleagues starting or working for are in this group. For every one friend I see go work for a company like Uber, I see 10 more working for a startup that builds software for expense reports or HR teams or insurance companies or old school cab companies. They will never be billion dollar companies, and they probably won't IPO, and they may be an acquisition target -- but that certainly isn't their end goal. They are gaining customers and growing relatively sustainably, making smart choices about when to (or not) take outside funding.

"I wanted a life beyond work. Hobbies, family, and intellectual stimulation and pursuits beyond Hacker News, what the next-next-next JavaScript framework looks like, and how we can optimize our signup funnel."

I don't think early investors dubbed themselves "Angels". If I recall correctly it came from the term used for similar people who used to finance Broadway shows. I like to think some ingenue in the 1920s called some rich guy "angel" and got him to fund a show and it went from there. I should Google it but I miss speculating on things...

>Part of the problem seems to be that nobody these days is content to merely put their dent in the universe. No, they have to fucking own the universe. Its not enough to be in the market, they have to dominate it. Its not enough to serve customers, they have to capture them.

I think this says more about the state of VC than startups themselves. Founders feel that if they don't run around banging pots and pans while tooting their own horn/vuvuzela, they won't ever get any attention from investors. Crowdsourcing early funding is just going to make this worse.

However, I personally feel disdain for the perspective that every engineer should maintain a noble sense of worth. Whatever the environmental differences of being in SF or elsewhere, people are bad at engineering wall to wall, and yet there are still successes on both sides of the coin.

Frankly, 'Software eating the world' has nothing to do with us. It has to do with, well, the world. And my own struggle with the tech industry is how disconnected we are in the 'startup and grow' sector.

There are millions of small businesses (in the US) and the vast majority of them are just lifestyle businesses that bring freedom/pleasure/excitement to the owner with perhaps a possibility of earning more, but there are many that earn less than what they could make at a regular job with their same experience.

This whole world domination startup thing seems to be localized to SV really. Nowhere else is this considered normal.

By the way, there's a lot of confusion with just the term "startup" in general when it should really be reserved for something new (biz model, innovation, etc). If you're just doing something that's already been proven, which is completely ok, then it's just a small business.

An eloquently expressed POV, as we've come to expect from @dhh. Legend. Now what I'd love to hear is from the founders of @stripe and @intercom, as they have both lived the experience of developing 'lifestyle'-scale businesses before bringing unicorns to the market.

I find it interesting that this article, which got 541 points (at the time I am writing this), is off the front page already. Makes me wonder if there is the hand of people (VCs) who don't want programmers to aim for lifestyle businesses behind that.

They are everywhere... but we are quieter than high-growth startups. Many of us work for them. We enjoy it. If anyone was not aware of their presence, step outside of the VC-driven culture. There is a whole other world out there.

This post really resonates with me. I think there is a "silent majority" type of situation going on. Most people venturing out do it for the reasons DHH states: independence, the ability to work with people of your choosing, etc.

Most of us don't care about winning some ridiculous lottery where smarmy Wall Street types stakehorse tech wunderkinds.

At the end I pictured David, standing on stage, stared at by the audience. After a brief moment of silence, he drops the microphone. A loud crack echoes through the PA, followed by screeching feedback, before the audio engineer remembers to turn down the volume. Meanwhile David turns around, calmly stepping down from the stage to be detained by the (Ge)StartupPolice.

The reality is you can't accomplish anything meaningful in life without being uncertain. It is OK to be unsure. It is OK to do things that are not sexy. It is OK to make small incremental improvements.

Unicorns happen over time. Smart founders have patience and resilience to weather many storms ahead.

> In the abstract, economic sense, a 30% chance of making $1M is as good as a 3% chance of making $30M is as good as a 0.3% chance at making $300M

I see this repeated as a truism all the time by the anti-VC crowd and it sounds great. But is there actually any evidence whatsoever of it?

The success rate for startups which have raised a Series A is substantially higher than the success rate for startups and small businesses in general. If it were true that avoiding VC funding somehow gave me a 30% chance of building a $1M business, I'd be happy to give it a shot (at least for a year or two). But I just don't see any evidence of that.

If anything, it seems like companies which accept VC money have dramatically better odds of success than other startups. The only reason it seems like VC has a high failure rate is that nobody bothers to write a news article when a random small business fails.
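As a side note, the arithmetic behind the quoted claim can be checked directly. A quick sketch, using the figures straight from the quote:

```python
# Expected value of each option from the quoted claim (payoffs in $M).
options = [(0.30, 1), (0.03, 30), (0.003, 300)]
for p, payoff in options:
    print(f"{p:.1%} chance of ${payoff}M -> expected value ${p * payoff:.2f}M")
```

Interestingly, the three are not quite equal as stated: the 30%-of-$1M option has an expected value of $0.30M, a third of the other two ($0.90M each), so the equivalence in the quote only holds between the latter pair, quite apart from the empirical question of whether any of those probabilities are achievable.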

What a well-written piece. I agree that there is a lot of room for sustainable, technology based businesses. These are businesses that can optionally be run remotely and don't need to limit themselves to the bay area and its insane startup culture. They can work sane hours and be sustainable for their employees. They can provide products of real value. In fact, this is exactly the type of business that I'm interested in, the only kind with a success profile that's not tantamount to the success of playing the lottery, and the only kind where one can hope to stay in control and "be one's own boss."

These types of businesses are very much startups, but have a more traditional philosophy and generally plan to stick around for longer than a few years. I'd say such businesses are often a lot riskier for the owners as they're generally risking their own money and time to get it developed rather than someone else's money. Spending someone else's money is not risky at all. Failure in a silicon valley startup isn't a real loss: it's expected.

The people who venture out on their own and take their own risk with their own capital and time should be applauded for trying to create sustainable businesses that might be beneficial to the wider economy and society rather than creating ones that try to dominate a market for a couple of years and then almost inevitably fade out (as most startups do both before and even after IPO), not really adding much to the economy or society at all, while, of course, screaming the obligatory "I will change the world" mantra. In fact, it's this idiotic mantra and the lies one must tell oneself to actually believe it that turns off a lot of great talent from the silicon valley startup version of a business. Most smart people can eventually see through such simple, repeated, dogmatic ideas easily, and don't like to be associated with the brainwashed masses for whom these ideas are reality.

I like this post, but we should contextualize it properly, and look at where it doesn't work. Things have developed over the past forty years which don't allow the "start small, stay small" to always be a possibility: the increase in winner-take-all markets and the Superstar effect. We see it everywhere when it comes to today's job markets, and we also see it (and potentially worry about it being the case, and this is critical) in industries themselves. This latter belief means that if you decide to start something, you may need to consider whether you should bother at all if you aren't going to go big.

We see it in attention more generally (which has second-order effects, like everyone using just one or a handful of large platforms(!), where we call it variations of "winning in the Attention Economy"): https://en.wikipedia.org/wiki/Attention_economy

So the choice is sometimes (perhaps even often today) not between "get big" and "stay small/medium", but between getting big (where big may represent firm size, level of knowledge/skill, fame, or a number of other attributes depending on the area) and getting (almost) nothing. When the distribution of customers/eyeballs/rewards is as lopsided as it is in many areas, the only choice IS "get big or go home."

I don't knock dhh, and this is one of those posts I actually want to agree with, but it doesn't neatly comport with extant realities. I think even this advice, just like the advice to "get big" needs to be taken very carefully. All of these roads entail risk (obviously), but the choice of big versus small isn't as simple as implied.

This prevents a country from forcing somebody like Microsoft or Apple to give up their source code for "inspection" in order to access their market. It also helps to prevent States from demanding and acquiring encryption or other private keys (there's a separate section that also explicitly forbids mandating backdoors be added).

The "parties" of a treaty are governments. This has nothing to do with GPL. This is saying that a government can't say "you aren't allowed to sell software in the country of Frain as a non-Frainian unless you provide the source code for that product (whether to the end user or to the government)". They leave an exception for "critical infrastructure", because it was hard to argue that the government of Frain shouldn't be able to require that nuclear control software come with source code. Essentially, I don't see why this clause is concerning. It is clearly a form of pandering to the interests of software developers reliant on intellectual property rights, but only in a way that seems to me mostly about forcing capitalism on nation states that might disagree with its premise.

So in short, if I understand this correctly, the US government (and any other government party to the treaty) will for example be unable to insist that Volkswagen (or any other manufacturer) open source their future emissions control software (as a condition for regulatory compliance) ?

An interesting side effect of this would be the invalidation of the Nevada law requiring the source code for all electronic gambling machines be disclosed in order to operate in that state.

It seems like it would also apply to new or existing laws requiring the disclosure of code inside proprietary voting machines, medical equipment, and of course, the Volkswagen ECU. Then again, could those things be considered "critical infrastructure"?

Does anyone know anything about the authors, Knowledge Ecology International, or their predecessor Consumer Project on Technology (CPTech)? They look interesting but their about page doesn't tell me very much.

If a government wanted to give out Linux PCs to children, then the students could require the government to provide the open source software, as that is part of the copyright conditions of Linux. But the government couldn't require the distributor of the Linux PCs to provide the source code. What happens? Would it be illegal for the government to buy Linux PCs for civilians? Note: a Linux PC could be a smart card used for identification, voting, a licence, etc.

So once this becomes law (and surely it will), how do these finer points of the law get decided? Will it be done by the arbitration panel, i.e. the highly paid lawyers who take turns being plaintiff, defendant, and judge?

This TPP is such bad news. I've never been politically active enough to want to "run a campaign" but honestly, this thing is really motivating me to take time out of my busy schedule ... I feel like it's such an uphill battle to get this thing defeated.

Actually this should (and I believe some day will) be mandatory. Everyone who wants to take money for software should be obliged to disclose full source code to purchaser. In case of mass market software it would be just publishing the source code.

As products grow in complexity and corporation grow in power the only way to secure safety of the public would be to prevent corporations from profiting from secrecy.

"We could do it for $1 Trillion with liquid-fueled Molten Salt Reactors, on the same amount of land, but with no water cooling, no risk of meltdowns, and the ability to use our stockpiles of nuclear waste as a secondary fuel."

This is not a production-ready technology, though. I believe there are lingering problems with corrosion. And the claim that the MSR doesn't require secondary water cooling is odd: what's the turbine working fluid heat dump supposed to be?

I really object to phrasing energy policy as either/or. Build out renewables, now, because that's ready. Let's give the MSR a fair go at getting to production-ready, see if the problems can be worked out.

Well, we'll know soon how the new AP-1000 reactor works. The first unit starts up next year, in China.[1] The first US unit should start up in 2019. It's a boring old pressurized water reactor and should work.

The history of large exotic reactor designs is poor. Sodium-cooled reactors have sodium fires. Helium-cooled reactors have helium leaks. (The Ft. St. Vrain story is sad; good idea, but some badly designed components in the radioactive section.) Pebble bed reactors jam. (A small one in Germany is jammed, shut down, and can't be decommissioned.) Molten salt reactors require an on-site chemical plant which processes the radioactive molten salt. Chemical plants for radioactive materials are a huge headache and have the potential to leak. With pressurized water reactors, you only have to handle water, not radioactive fluorine salts.

All designs where the radioactive portion of the system has much complexity have had major problems. Fixing anything in the radioactive part is extremely difficult. But the reactor has to run for decades to be profitable.

Actually, renewable energy is, when you actually run the numbers in a sensible way, pretty cost-effective: http://www.sciencedirect.com/science/article/pii/S0378775312... - this finds that with 90% solar/wind and modest amounts of storage, electricity would be cheaper in 2030 than it is today, and that the cheapest option is actually a vast overcapacity. There are plenty of flaws with that article, but still fewer than this post.

I'm in the awkward position of supporting nuclear power in a country that has its anti-nuclear stance as part of its national identity. New Zealand's Prime Minister David Lange famously argued against it at an Oxford Union debate, and ever since, Kiwis have viewed it as us standing up against the 'big guy'.

Some people I've spoken to view this as on par with not supporting the All Blacks. To top this off, they typically have an irrational fear of nuclear power stemming from pop culture such as The Simpsons.

It's New Zealand's dirty little secret that we're nowhere near the "100% pure" ad campaigns we're running. Half our rivers are polluted beyond repair. We have less forest coverage than Japan. We flooded vast tracts of land for our dams. And we're still dependent on non-renewables for our electricity.

The article looks at the cost of energy over the lifetime of the nuclear power plant. There is no argument that energy generated through fission is very cheap when looked at that way. However that totally ignores the enormous startup costs.

The great thing about wind and solar is that you don't have to build a whole farm. You can start small and keep adding as you come across more capital.

In any case I don't see why one needs to make it a dichotomy. The entities who invest in alternative energy are probably not the same ones who could invest in a nuclear power plant because of the above mentioned startup costs.

It's not clear to me whether fission will come back any time soon but wind and solar will keep gaining in market share.

The biggest problem with nuclear is making the numbers work. You end up paying something like 80% of the cost of a 1 GW plant's 50-year life before you get your first cent in revenue. That's a very tough thing to finance. Cost and especially time overruns during construction can easily tank the project financially. About the only way to make it work is to be a regulated utility that has a long-term captive audience for its power, one that's very likely to be the same size or bigger for the next half century. Even there you still have to worry about technological change pulling the carpet out from under you in 20 or 30 years.
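To make the financing problem concrete, here is a toy discounted-cash-flow sketch. Every number in it (build cost, annual revenue, discount rate) is a hypothetical chosen only to illustrate the shape of the problem, not any real plant's figures:

```python
# Toy model: nearly all the cost is paid up front, while revenue
# arrives over 50 years of operation. All figures hypothetical, in $M.
build_cost = 8_000        # paid before the first kWh is sold
annual_revenue = 400      # per year, for 50 years of operation
discount_rate = 0.07      # a plausible private cost of capital

# Present value of 50 years of revenue, discounted to the start.
pv_revenue = sum(annual_revenue / (1 + discount_rate) ** t
                 for t in range(1, 51))
npv = pv_revenue - build_cost
print(f"Undiscounted revenue: ${annual_revenue * 50:,}M")
print(f"Discounted revenue:   ${pv_revenue:,.0f}M")
print(f"NPV:                  ${npv:,.0f}M")
```

Even though undiscounted lifetime revenue ($20B here) is 2.5x the build cost, at a 7% discount rate the net present value comes out negative (roughly -$2.5B in this sketch): money earned 30 or 40 years from now is worth little today, which is exactly why the long pre-revenue period is so hard to finance privately.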

The German wikipedia does have some interesting references to reports on thorium reactors from the British National Nuclear Laboratories [1][2]:

> In the foreseeable future (up to the next 20 years), the only realistic prospect for deploying thorium fuels on a commercial basis would be in existing and new build LWRs (e.g., AP1000 and EPR) or PHWRs (e.g., Candu reactors). Thorium fuel concepts which require first the construction of new reactor types (such as High Temperature Reactor (HTR), fast reactors and Accelerator Driven Systems (ADS)) are regarded as viable only in the much longer term (of the order of 40+ years minimum) as this is the length of time before these reactors are expected to be designed, built and reach commercial maturity.

The word "centralized" does not appear anywhere in the post, which tells me the author is not thinking about it as an issue. But centralization is an important bias of nuclear energy. One of the benefits of solar is that it can be centralized, decentralized, or a hybrid of the two; being less biased in this way, it opens up more flexible options.

Another important aspect of solar is that its performance is a moving target. Because solar cells are improving over time, comparisons against them need to be kept updated, or else the underlying assumptions of the comparison are invalid.

I have nothing against nuclear energy, but I have a problem with "let's just put the waste, idunno, here and let it sit for a few thousand years".

In Germany, several of the energy companies have completely distanced themselves from the waste they produce. I was at a conference once where one of the heads of EnKK (a daughter company of EnBW), asked what he thinks his responsibilities for the waste are, said:

"Well you know, you don't care what happens to your waste at home. Look to the law, we are not responsible."

I think this is one of the biggest reasons why nuclear power has run its course. It might be feasible for countries like the US, Russia, or China to find a spot to store their nuclear waste, but in densely populated areas of Europe? No way.

Utilities are mostly free to build nuclear power plants. They don't because a) they can't afford the construction cost, and b) they can't afford the insurance cost. So the nuclear industry says "no problem, we'll just ask the government to subsidize the construction cost and pass laws to shield you from liability". But they are finding that the "public risk, private profit" paradigm doesn't sell so well anymore, for some reason.

A nuclear researcher from iThemba Labs gave a talk at University of Cape Town shortly after the Fukushima accident.

1. During his talk, he mentioned that nuclear plants are designed to have a very low probability of Chernobyl-scale failure, and that the current rate of Chernobyl-scale failure given the number of nuclear plants in production is roughly 1 every 20 years.

2. He eventually concluded his talk by saying that we should double the number of nuclear plants in production around the world to remove our dependence on coal.

I asked a question at the end of his talk: given point 1, point 2 would mean one Chernobyl every 10 years. He was completely dumbfounded; he had never combined his two claims with simple probability theory. One physics student present then said angrily: "Yes, but we will get better at building nuclear plants."
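The arithmetic behind the question, as a sketch. This treats fleet-wide failures as a constant rate that scales linearly with fleet size, which is a big simplification, but it is exactly the simplification the speaker's own numbers imply:

```python
# Expected-interval arithmetic from the talk's own figures:
# one Chernobyl-scale failure per 20 years at today's fleet size,
# and a proposal to double the fleet at the same per-plant risk.
current_rate = 1 / 20            # failures per year, fleet-wide
doubled_rate = current_rate * 2  # doubling the fleet doubles the rate
print(1 / doubled_rate)          # expected years between failures: 10.0
```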

Or just install solar roofs with power walls in every single American home for 1 trillion dollars and be done with it. No need to occupy extra land or run expensive machinery and power lines all over the country.

I don't know anything about nuclear energy, but a friend who works in nuclear tech told me that the reason thorium as an energy source hasn't been embraced is because its byproduct can be used for weapons. That doesn't look to be exactly accurate (based on a few minutes of googling), but there might be some truth to it.

It's a nice write-up, but I don't believe it. It focuses too much on capacity factor, which is an interesting, easy measure, but not nearly as important as one might expect. The key measure is actually dispatchability, i.e. how quickly you can turn it on and off again. In markets with energy trading, this is key for profitability.

The CSP plants with molten-salt storage do really well on the market because they can turn on and off as fast as gas peaker plants (for as long as they have storage, of course). They will of course have a low capacity utilisation, as they are peakers in all but name (aimed at the daily evening peak load).

Nuclear has a high capacity factor not because it is base load, but because once a plant is on, it is on. And being on, it will sell all it produces for as long as possible, driving all other producers off the market. In the current market, with 20% nuclear in the US, plants are rarely forced off the grid because demand is always greater than what they can supply. Once we hit 50% of peak load supplied by nuclear, this high capacity factor will drop, for the simple fact that while a nuclear power plant can produce, no one is buying.

So for a whole system using just nuclear, the capacity factor will be around 60%, not 90%, due to market realities on the demand side.

Wind is at a 40% capacity factor not because of technical limits. One could build 90%-capacity-factor windmills if you were crazy: you just derate the generator but keep the same blades. E.g., put a 1 MW generator in a turbine designed for 3 MW and you will get your 1 MW most of the time, but never capture the 3 MW you could some of the time.
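That derating trick in rough numbers. The hourly wind profile below is invented purely for illustration; real capacity factors depend on the site's actual wind distribution:

```python
# Illustrative derating: a rotor sized for 3 MW paired with a 1 MW
# generator saturates at 1 MW whenever the wind would support more,
# so the generator runs at nameplate far more of the time.
hourly_rotor_output = [0.2, 0.8, 1.5, 3.0, 2.2, 0.5, 1.0, 2.8]  # MW, invented

def capacity_factor(nameplate_mw, rotor_output):
    """Delivered energy divided by nameplate energy over the period."""
    delivered = [min(p, nameplate_mw) for p in rotor_output]
    return sum(delivered) / (nameplate_mw * len(rotor_output))

print(round(capacity_factor(3.0, hourly_rotor_output), 2))  # full 3 MW generator
print(round(capacity_factor(1.0, hourly_rotor_output), 2))  # derated 1 MW generator
```

With this made-up profile the 3 MW machine runs at a 0.50 capacity factor while the derated 1 MW machine reaches 0.81, at the cost of never capturing the peaks.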

Wind power plants are repowered after 20 years at the moment, not due to limitations of the tech, but because turbines made today are so much better than those of 20 years ago. Most of these old windmills are actually resold on the secondary market, and it's quite difficult to get enough 500 kW mills at a good price. Also, up to 2/3 of the value of a windmill is in its steel tower, value that will exist in 20 years as much as it does today (little rust, and easy to recycle).

He also takes 2.5 MW turbines from GE as an example, but those are rather small these days. Turbines are heading into 8 MW territory today, and we will see 12 MW-plus mills in the coming years.

Critically, looking at it from an economic perspective: once you start building a wind farm, it takes 18 days to build a turbine on site, from base to grid connection. That means you know within a month whether it's going to work. This makes for easy financing compared to nuclear, where the average best-case build time is 4 years and often descends into decades of building. Financing-wise, that is a completely different game.

Also, wind and solar power are added to the grid at the 100 MW order of magnitude, in single-year projects. This is better for project financing and risk management: if the panel in phase 1 turns out to be a bad buy, you buy a different one for phase 2, and the same goes for wind turbines.

All in all, 500 to 1,000 AP1000s could not be built in 20 years even if we lined them up one after the other. The infrastructure in forging and assembly is just not there, and wouldn't be even if we went all in on it.

Wind and solar can be built at that scale, because the manufacturing is distributed. Best-case nuclear numbers barely meet offshore wind numbers today, and wind, which has manufacturing-at-scale benefits that nuclear does not, will reduce costs much faster than nuclear.

PV efficiency will go up; e.g., look at First Solar's efficiency roadmap.

If you are buying a power plant, then you will quickly see that nuclear is not a cheap or easy-to-finance option. If a 1 billion wind farm is not working out, you cancel midway through construction with 500 million of pain. If a 4 billion nuclear power plant is not working out, you are a decade downstream with a 10 billion bill. 4 billion is a sum few electricity companies can gather; borrowing 500 million briefly during construction, after which the wind farm is sold on a power purchase agreement, is much easier for many more electricity companies. Think: if you owe the bank 1 million, the bank owns you; if you owe 100 million, you own the bank...

Nuclear should work, and if it did it would sell like cupcakes. The problem really is that these cupcakes cost as much as a trip to the moon... You might wonder why I bring up financing so often: in an energy market like the EU or US, where there is enough current generation, financing is what decides whether a plant or farm gets built. Often you can only finance it if you can drive another generator off the market by being cheaper.

Solar and wind benefit from the law of large numbers; nuclear does not. If a nuclear plant goes into maintenance, you lose 1 GW. If a turbine goes down, you lose 10 MW; if a substation blows up, you lose 100 MW. A nuclear power plant transformer fire, where you lose 1 GW in a minute, is a major grid issue. A design issue in your plant type, and 4 GW go offline for regulatory reasons: a major grid issue.

Nuclear power plants are too big and too expensive. Small modular nuclear could work, but no serious market players are in this field.

Energy storage is currently still too expensive, but even there prices are dropping, and utility-scale power storage is not just pumped storage. (Funnily enough, in the UK pumped storage was built because of the nuclear investment.) Hydrogen, heat, batteries, compressed air, and flywheels are all being investigated, and each has deployments in the market. They are currently still rather specialised, depending on local market needs, but getting closer to taking on gas peakers.

It's the sad truth that people seem very much in favor of nuclear, but no one wants to live near it. Of course, I've seen articles about people who don't want to live near wind turbines either, so maybe people are just far too fickle.

Well, once again an article that fails to consider that the energy production issue could be addressed by controlling our energy wants instead of mindlessly using more and more, combined with local self-production instead of centralized giant production plants and transmission.

Let's not forget that nuclear plants are basically huge and optimized steam dynamos and that a breakthrough in producing electricity would be a game changer.

Every form of energy generation has positive and negative points; to claim that one is the best is an oversimplification. In fact, when I met people who worked in that industry, their argument to me was that no single source of generation could meet full demand. To me the real prize in the energy game is efficiency, and there's quite a bit of wasted energy if you look around.

The article neglects to mention the insurance costs of running a nuclear power plant versus the insurance costs of running solar / wind. It's nice to just hand wave away insurance, except when you discover that only nation states have the ability to insure these things.

I find the argument about area needed spurious. By using the same point, I could claim that there is no way automobiles will succeed in society, because of the enormous area that would need to get paved... except that doesn't seem to have stopped us.

I wonder why Hacker News is so positive on nuclear power. The article is full of false or misguided information. I always recommend "Into Eternity" to understand why nuclear is one of the worst options for eternity ;) https://en.wikipedia.org/wiki/Into_Eternity_(film)

This article is pretty terrible; the bias is just ridiculous. When discussing wind and solar, the author constantly makes worst-case assumptions and questionable comparisons. For example, stating that "more Americans" died from installing rooftop solar than have died from construction or use of nuclear power cherry-picks the most dangerous possible activity related to solar power (some guy up on his roof, which is dangerous with or without solar panels) against the least dangerous thing about nuclear energy (professional contractors constructing the plants, and the plants running normally).

He cites figures when they are favorable for nuclear or impressive for the point he's making, and leaves them out when they would undermine it. He cherry-picks the American nuclear experience in the examples above because we have so far avoided a truly terrible disaster here. (Three Mile Island wasn't good, but it wasn't Chernobyl or Fukushima.)

He also cites figures, without sources, for the amount of space needed for solar and wind to replace all current forms of power. Never mind that actually replacing all of our power with renewables is something that will take a century or more; nobody's talking about completely replacing our entire power generation system in the next few decades with renewables or nuclear.

The real question is not "what can replace our whole system today." because the answer to that is nothing. The real question is, as we expand and replace generators that are being decommissioned, what should they be replaced with?

The concerns people have about nuclear are not about whether it's more cost-effective than renewables (if all we cared about was cost we'd keep burning coal, and we know we can't do that), or whether it's safer to build, but about the long-term effects and long-term dangers. The long-term dangers of a solar farm are basically nothing; you cannot get a Fukushima-like disaster out of a solar plant.

The pro-nuclear side will tell you "Oh the new reactors are totally safe, you could never have a problem like that." But they've always said that about nuclear plants. "Oh this new design is safe." Then a disaster happens and they say "Oh well that was the old design, the new design is safe."

Considering all the delays involved in getting up new plants (both technical and political) we're looking at what, a decade out for something with a break-even point of several decades? How far will renewables be then? There might be some logic to further invest in renewables instead of spending hundreds of millions of dollars on a stop-gap solution that's guaranteed to just take us to peak uranium sooner than later. That said, we shouldn't be decommissioning existing plants for political reasons like the Germans are.

From a personal perspective, I think I'm using less electricity than ever. I only have CFL/LED bulbs, flat-screen TVs use less power than old tubes, every appliance I own is tons more efficient than the stuff just a generation ago, etc. Heck, even my powerful desktop PC uses a lot less power than before.

So nuclear is cheaper if one takes the industry's numbers for cost, while ignoring the problem of spent fuel. And ignoring the actual main cost of nuclear power, which empirically is the property damage in Pripyat and Fukushima.

Nuclear energy is very expensive when adjusted for building the plant. Companies building those plants know this, and their usual tactic is to severely underbudget, knowing that a government cannot refuse to pay once it is committed. The Olkiluoto plant in Finland was estimated at 3bn before the start, and at the most recent revision the estimate was 9bn with a 10-year delay. The plant is not yet operational, and I expect there will be further delays. Don't drink the nuclear Kool-Aid.

It seems to me that the recent few articles on fusion have inspired a lot of people - who now seem to be thinking "hey you know what would be great? If we moved back to fission reactors!"

I'm not talking about people writing these articles. I'm sure there's one appearing on the Web every week. I'm talking about people who are suddenly upvoting these articles.

But to me that doesn't make any sense. Fusion, I get. It's something like 10x more efficient than fission, and it's not radioactive or as dangerous as fission. But just because I support having more companies and research into fusion, does not mean that I would support fission reactors.

Despite this article, solar power is still the most practical way of getting renewable energy in the next 20 years. Thinking about building new fission reactors is like getting excited about some "breakthrough gas-powered engine that uses 50% less fuel", when everyone is already thinking about getting an EV for their next car.

I'm really tired of all the internet kids plugging nuclear. The numbers don't work, and the cost of failure is huge. We can put solar on everyone's roof and turbines in windy areas and it's completely safe, dumb nuts simple, and decentralized. Nuclear energy is dead. Let it go.

I'm not willing to live next to a reactor and I'm not willing to force someone else to. If people really think nuclear is the best option put your money where your mouth is and raise your family next to one.

There is nothing that can guarantee that this ride is real, but here are a few things:

1. However big MMM is, it's probably too small for the size of the bitcoin economy right now. Remember that if you are transacting in bitcoin, there is one party buying and another selling. So it is not raising the price to the moon.

There will be a big drop at some point, no one really wants to sell when it's rising like this. This sort of upward movement is exactly why a lot of people have bought Bitcoin. Trying to time the top will be hard though. I imagine a subsequent drop of ~25% would be likely.

Could somebody comment on how "future proof" the bitcoin technology really is? How large can the trading volumes become, and is there a limit? What advances have been made in crypto-currencies that will replace bitcoin?

There is a LOT of talk about "blockchain technology" going around these days. It seems like another "cloud", i.e. some vague concept that consultants can sell to CTOs who are afraid of appearing behind the technology curve. New clothes for the emperor.

Since bitcoin is the reference implementation of "blockchain technology" obviously all this interest rubs off. And since bitcoin at this point is all promise without much reality behind it (like a "pre-monetization" startup), it is a great vehicle for hype driven speculation, meaning that any price increase drives further speculation, until the next hard crash.

I slowly bought about 25 BTC since January. Honestly this is very exciting right now and I've basically been daytrading for the last few days. I'm no expert by any stretch of the imagination, but it's been enjoyable to set alerts for different price points then buy/sell as necessary. I like the liquidity and 0 government involvement/taxation - something I couldn't do with stock/options/CDs etc.

From a technical standpoint, BTC formed a nice base around $200-$250 over most of 2015. It was mostly quiet in the news, so it was an ideal time to invest. Now it is breaking out to the upside of that base. Sell when your neighbor tells you to invest in bitcoin.

There is a chance the price could go nuts (e.g. $1 million/BTC) - it could be a serious bubble, because of the technology behind it, and the implications it could have to the world.

"It is one of the great paradoxes of the stock market that what seems too high usually goes higher and what seems too low usually goes lower." - William O'Neil

For anyone that has been around BTC for long enough, this is expected, but also expect more rallies and more crashes.

Quick plug: if you are looking for a secure wallet with good privacy for your fresh bitcoin, try [Coinkite Multisig](https://coinkite.com/multisig): up to m-of-15, any/all keys can be generated offline, option to escrow backup, notifications, multi-user multisig, support, works well with Ledgers, and the list goes on!

Actually, the break of price parity between Chinese and Western exchanges is historically very bad news for bitcoin. It will mark China as purely speculative and will break ties with the only force that was keeping the price at the 300 USD level in the past months.

It's similar to what happened with Mt. Gox. When price parity ended (although there was always a ~1-2% premium on the Mt. Gox price), bitcoin crashed hard and never recovered.

The developers are still arguing about the 1 MB blocksize limit. Currently bitcoin is limited to a theoretical maximum of about 7 transactions per second (~600k transactions per day). This figure was derived assuming the smallest possible transaction size, but of course real transactions are larger, and the practical maximum is only about 2-3 transactions per second (~200k transactions per day). This should have been a pretty easy limit to change; it's just a #define in the code. But some core developers are against it, so we're deadlocked and have been for at least a year.
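The throughput ceiling follows directly from the block parameters. A minimal sketch, assuming a ~250-byte minimum transaction and a ~600-byte figure as a stand-in for realistic transaction sizes:

```python
# Theoretical vs practical throughput under a 1 MB block limit,
# with one block mined roughly every 10 minutes.
block_size_bytes = 1_000_000
block_interval_s = 600

def tps(avg_tx_bytes):
    """Transactions per second for a given average transaction size."""
    return block_size_bytes / avg_tx_bytes / block_interval_s

print(round(tps(250), 1))  # ~6.7 tx/s with minimal-size transactions
print(round(tps(600), 1))  # ~2.8 tx/s with more realistic ~600-byte transactions
```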

A number of DoS attacks on the bitcoin network (some based on spamming lots of fee-paying transactions, some based on retransmitting slightly modified versions of existing transactions) have rendered it practically inoperable for days on end. The fee-paying attack isn't terribly expensive, costing only a few tens of thousands of dollars per day, while the "malleability" attack costs literally nothing beyond an internet connection. There hasn't been any significant progress on solutions for either problem.

The bitcoiner response to this is that you shouldn't have been affected by either of these if you used the right client or paid the right fees, but imperfect clients and fixed fees are a reality of the bitcoin ecosystem so I don't see how wishing them away changes anything.

One of bitcoin's biggest advantages was supposed to be the ability to quickly react to changing circumstances by modifying the protocol. Recent events have shown this is pretty much impossible: too many people are too heavily invested in its current form, and they want no risks and hence no change. Techniques which supposedly allow risk free changes (called sidechains) still have a long way to go before we can be convinced they work as intended. Besides they're not even fully implemented on the bitcoin testnet, let alone real bitcoin.

Overall, even if people are buying into the "blockchain" tech, it doesn't seem to me that they would be interested in bitcoin itself because of the problems I've outlined. I think the safe money is still on this being a repeat of the Willybot/Mt Gox fiasco where the exchange operator used nonexistent dollars to buy bitcoin and drive the price higher. Other people were sucked in by the uptrend and they ended up with nonexistent bitcoin/dollars which they weren't able to withdraw from the exchange.

For some reason the website fails to prominently mention the two defining characteristics: it's based on Dart and it uses its own widget implementation.

Honestly, between the use of a relatively exotic language of dubious quality with types that are "optional and unsound" (https://www.dartlang.org/articles/why-dart-types/), their custom widget implementation that who knows how well is going to work, and lack of web support, it doesn't seem very attractive.

And at the same time another group at Google is working on Singular, again cross-platform development for Android, iOS & web (web is not supported by Flutter), which works in an AngularJS-like fashion. There is a difference, though: Singular doesn't do its own widgets (you use platform-specific ones). Maybe they will be able to marry the two, but I doubt it, so we will need to choose.

"Flutter is optimized for 2D mobile apps that want to run in both Android and iOS. Apps that use Material Design are particularly well suited for Flutter."

Which means that, if Flutter succeeds, iOS users should expect to see more and more apps built using Google's Material Design language. It's already happening for some of Google's iOS apps--the floating action button in Calendar, Hangouts, Photos, Docs, the iconset in most of their iOS apps (Gmail and Chrome are partial exceptions), etc.

In other words, Material Design is making significant inroads into iOS. But Apple's design language isn't doing the same in Android world.

Disclaimer: I'm a cofounder of Recent News (https://recent.io/), and we borrowed some Material Design concepts for the iOS version of our app.

I wouldn't be surprised if this is Google's longer-term solution to the Java problem. It's clearly not tenable for them to stay on Java 6 forever so they either need to make web-based apps first class citizens on Android or put forward their own new stack.

I investigated Flutter some time ago after seeing it mentioned on HN, and it is quite fascinating - as an Android user. It has a stated goal of 120fps on all UI elements, which would be a huge step in bringing sluggish-feeling Android up to the levels of iOS.

But Flutter is also intended as a cross-platform framework for iOS too. That part I'm less sure about, though if it lives up to its promise, it would be great. Overall though, far too early to tell. Using Dart might not be popular, but if you don't already know Objective C/Swift and Java, it might not be any worse than today.

The engine's C/C++ code is compiled with Android's NDK, and the majority of the framework and application code runs on the Dart VM. The Dart VM generates JIT-compiled, optimized native code on the device. (In other words, the Dart VM is not an interpreter.)

So Google is officially making its stand as Dart being the future of Android (the Swift of Android?).

Anybody who has worked with both this and Kotlin care to compare?

EDIT: how does this work inside Google? I mean, you have legendary heavyweights behind Go. Were they designed as domain-specific (Go for servers and Dart for mobile)?

The amount of JavaScript-is-God philosophy on here is excruciating... I respect everyone who likes JavaScript, but just because Dart isn't what you like doesn't make it inherently useless or bad. The world is all about diversity; how about some language/preference diversity?

In my opinion, JavaScript is a large flame, and all the languages that were inspired by it are the sparks. Eventually the JavaScript flame might die out, but the sparks it makes can still burn, and create their own flame.

I'm a huge lover of Dart, and I generally dislike JavaScript. My advice is to welcome Flutter. If it works out, it will work well, if it doesn't work out, it will fade away.

Don't we have enough cross-platform dev kits anyway? I fail to see the need for so many more platforms. The state of ART/DVM is going to become like that of the JVM. Maybe as devices become more powerful, they will be capable of handling such needs. But why would you want to develop apps where you have to wait for the native platforms to release updates which are then ported to these frameworks? Android updates happen every 3 months; maintaining evergreen apps in such cycles is a task in itself.

I understand the evolution of JavaScript-based frameworks, considering the large number of developers available. But nowadays you have to spend more time learning a framework than developing something with it.

The only feature which seemed to grab my attention was:

> Can I update my app over the network, outside of the Play Store? Yes. On Android, you can update your app over the network (via HTTP), without first publishing to the Play Store. This can be useful because it doesn't bother the user with a notification, ensures your users are on the latest version, makes it easier to run A/B experiments, and more.

This would be extremely useful, but I think there would be a catch to this too. I am actually tired of seeing new frameworks come up every day, especially from orgs like Google.

if a flow control structure's statement is one line long, then don't use braces around it, unless it's part of an "if" chain and any of the other blocks have more than one line. (Keeping the code free of boilerplate or redundant punctuation keeps it concise and readable.)

This is very new from the looks of it, so who knows how it will play out; at the moment I am not excited about it. It sounds like Java Swing, where you could create cross-platform apps but the UI did not use the platform's native components and so never quite felt right.

Optimized 2D engine? Hmm, is there any chance that there's a hardware-accelerated version of Skia somewhere in there? A while ago I was looking for a nice OpenGL vector (read: path rendering) library for an iOS app I was working on, but none of the available options at the time were good enough. Ended up hacking together a custom implementation in Cocos2d. (Actually, still hacking...)

It's quite early to call it either way, but in my quick testing it seemed no more performant than a webview for traditional tasks like scrolling through lists of "material design" elements (e.g. text), but then it surprisingly did very well with 2D graphics. A bit of lag in the game, a bit of lag scrolling, but maybe it'll get better.

* It's illegal to not provide, when questioned, the encryption key of a device in your possession.

* ISP Logging.

I've wanted to be in tech all my life, and I felt that British people have facilitated a lot of good things in the tech world, but I have never been so ashamed to carry my passport. This country is one that had great laws for librarians, especially after World War 2, which aided the privacy of the people.

But now we seem to have forgotten that once data is collected, it can be used to target and harm people in swathes; it can be used actively to destroy individual people, or even, in moderation, can cause people to self-censor (which carries its own problems).

I'm a British citizen, I will not return to the UK while archaic laws and boneheaded policy makers are eroding the very fabric of computer culture. Looks like the next election is in 2020.

The scary thing about web history logging is that it makes you question your web habits, if not become actively paranoid.

For instance, the article quotes the head of MI5 regarding preventing the bombing of the London Stock Exchange in 2010.

I wanted to know more about this, so Googled London Stock Exchange Bomb, and clicked on a few stories, and wanting to find out a bit more about the people involved, I then Googled their names and clicked on a few more links.

All this time, I had the thought at the back of my head: will these searches and clicks put me on a list somewhere?

It's this feeling that I most dislike about it all; something, or someone, somewhere may be watching, and so now I'm questioning myself because some discussion on some site has potentially questionable keywords in its URL.

I used to want to visit the UK, not so much anymore... when you find yourself mulling over how best to protect yourself in the same way you'd prep for attending something like defcon, it sort of loses its zeal.

Edit: wow, the downvotes are coming fast on this one, guess I hit a nerve.

That ... would be what MI5 is for. My objections only begin when law enforcement start getting access to spook grade data (which means I'd object to them sharing with the NSA, who're clearly rather permeable to law enforcement currently).

Tadaa... The surprising fact for me is that when I was talking with programmers, mostly located in the USA, on Reddit (I am not from the US), they didn't even care about the NSA and other agencies collecting their data. They acted as if they thought their data belongs to the NSA. That really got me thinking. It is my right as a human being to have privacy.

Garry is one of the nicest, most helpful people I know. Five years ago, he took a meeting with us, even though we had a barely functioning prototype, and helped us prep for our YC interview. After our first meeting, he even offered to meet up again. I was blown away by how generous he was with his time. We got into YC and he's been an incredible advisor ever since.

He's one of the few people I like to think of when I think of Silicon Valley at its best. He genuinely loves product and the creation process. He's hugely helpful and he's a great person, through and through.

End of an era! While a lot of ex-founder partner/VC people seem to think mostly about nation-building, vision, and the marketing buzzwords you'd find on an analytics dashboard, Garry's advice is always extremely concrete and close to the metal. When success came, he never rolled down his sleeves and stopped building, and that's something most of us around here can really respect.

One fond memory that resonates with how he thinks and works with other people was from an office hour we had with him. I asked him why he does what he does, and his response was simple: "I do this to help people get out of the rat race."

Garry is an incredible person and like most remarkable people he is also deeply humble. This kindness and approachable spirit, I think, allow him to be a great student of founders, startups, and cultures. I've learned quite a bit from Garry about how systems evolve and operate. He is a very rare person and a gift to the startup world.

Thanks Garry for all that you've done for the YC community. I wish you the very best in this next leg of your journey.

Garry's advice and support throughout YC (and afterwards) was unparalleled, but more importantly, he's just a really good person. I'll never forget when he broke out Photoshop during office hours and redesigned our app on the spot!

Hi Garry. I interviewed for W11 with my brother Andy. We flew out from Boston. We rented an RV and lived in it for a week before the interview. That fall was the first year the interviews were split between two rooms. We pitched a startup that helped people rent things (bouncy castles, excavators, photo booths etc) for temporary use. The interview didn't go well. I was fumbling all over and remember feeling hopeless. Then you stepped in and asked a bunch of optimistic product questions, driving the interview to a better place. We didn't end up getting in, but to this day I still remember the experience I had with you and consider the interview one of the best things I ever did. Thank you.

Garry, Thank you for your advice while we were there. I think I came in with my startup around the same time you came in as a partner.

I was always impressed with your calm demeanor and thoughtful pointers. YC is, in many ways, a frenzied whirlwind for founders, and it was great to have people like you around who made me feel that everything was going to work out OK.

Will definitely be missed. Garry is exceptionally kind, sharp, and helpful in an organization full of exceptional, kind and helpful people. Hope to work with you again, but love the decision you're making to spend time with family. Can't go wrong with investing in family time!

Never got a chance to meet Garry, but I always enjoyed following his blog and the insights/advice he shared. He always seemed incredibly genuine and invested in the success of others. Best of luck in this next chapter, Garry!

Having made the same choice and taking time off after my child was born, I totally understand. Thanks Garry!

Although I then went and started a startup, so I guess slightly different. :) Turns out starting the startup has actually given me a lot of time to spend with the kid since I work from home! Hint hint. ;)

Take care! One thing I want to point out (and it's something I hate to point out): would it be a good idea to edit out the upcoming trip and how long it'll be? Normally it's best not to post publicly where you're going and how long you'll be gone, and Garry is semi-well known.

I'm not fooling myself into seeing this as anything but a "you scratch my back, I'll scratch yours" setup. RedHat is getting a ton out of this, and so is Microsoft.

Who's the main target for Azure? Enterprise companies who trust Microsoft implicitly. When an exec comes to the head of IT and says "we need to be on the cloud! I read about it!", Azure eases the transition by letting you go to a vendor you've already been using for a dozen years.

RedHat's core audience is enterprise as well. RHEL is the de facto standard for that level of infrastructure because of the support you can get, versus distributions that are equally good but lack the support contracts.

So, they're helping each other out and that's good in my humble opinion.

Microsoft's new direction under their new CEO is one surprise after another. I've only tinkered with Azure so far, but it makes me want to pay more attention to MS than I would have a couple of years ago.

In the last 5 years we've experimented with Linux on top of hyper-v a few times.

The basic I/O, compile, and runtime performance was significantly inferior to Xen and KVM (we didn't benchmark against VMware, which we're phasing out due to cost); it wasn't worth the effort to even deploy apps for testing.

Therefore I don't see Linux on Hyper-V being a compelling option for the cost-conscious technical officer or lead engineer.

This isn't really new, it's just Red Hat being late to the party. From Jun 6, 2012:

"The Linux services will go live on Azure at 4 a.m. EDT on Thursday. At that time, the Azure portal will offer a number of Linux distributions, including Suse Linux Enterprise Server 11 SP2, OpenSuse 12.01, CentOS 6.2 and Canonical Ubuntu 12.04. Azure users will be able to choose and deploy a Linux distribution from the Microsoft Windows Azure Image Gallery."

"Collaboration on .NET for a new generation of application development capabilities, providing access to .NET technologies across Red Hat offerings, including OpenShift and Red Hat Enterprise Linux, which will be available within the next few weeks."

Further development of .NET as cross-platform, not just Windows-based? That could bode well for the stack.

I doubt I'll ever write .NET code again, but this seems like a sensible decision to me.

I think this is just ticking a box for Microsoft since I believe you can get RH on AWS. But let's be honest - how many people will perceive any value in stacking a Microsoft technology (.NET) on top of UNIX? Sure if you already have a .NET app, hosting it in UNIX may give you some benefit. But is anyone really going to write an app from scratch with this in mind? I'm skeptical.

I remember when Microsoft teamed up with IBM on OS/2, so I'd predict that Microsoft comes out with its own brand of Linux within 5 years. That will give them time to learn what they need to include/exclude and support.

"Collaboration on .NET for a new generation of application development capabilities, providing access to .NET technologies across Red Hat offerings, including OpenShift and Red Hat Enterprise Linux, which will be available within the next few weeks."

Xplat .net is coming to RC1 in a couple of weeks (per roadmap: https://github.com/aspnet/Home/wiki/Roadmap) and it's exciting to see that RHEL will support it. It makes sense for Microsoft, traditionally an enterprise company on the backend, to partner with a *nix company with, primarily, enterprise clients on the backend.

If nothing else, the toolset Microsoft brings to the table will raise all boats on the *nix side, IMO.

"During the Bush administration, people were kidnapped all over the world and dumped in secret prisons, where they were tortured. During the Obama administration, the kidnappings, the secret prisons and the torture, have been replaced by death lists and extrajudicial executions of people, carried out by pilotless aircrafts, known as drones."I spent hours and hours in 2007/08 watching Obama with the hope that change is real this time. And now, it's painful just to hear his name. With the current candidates on either side, just bracing for worse.

Technology is truly augmenting ourselves and this medium "shapes the scale and form of human association and action", as Marshall McLuhan once said.

With that said, compare a whistleblower of, say, 20 years ago with one today. Snowden not only had the world's greatest communication platform at his disposal to disseminate whatever information he cared about, but he can still address millions of people, speaking at the world's greatest universities and giving interviews, while in exile.

Regardless of where you stand on these privacy/spying issues, I think it's hard to deny the fact that he started a dialog, and now the entire world can be part of it.

But he was on his way to Cuba, which would have been his choice. It would have been a bad choice, because the US would have grabbed him there. It's a pretty quick jaunt from anywhere in Cuba to the US base at Gitmo.

What this article makes clear is that he is heavily guarded by the Russians. It's not a coincidence that this meeting took place in a hotel filled with high ranking Russian military. Would Cuba have afforded him the same level of protection?

"Edward Snowden reached 1.5 million followers in no time. He only follows one himself the NSAs official account."

That's funny.

On an unrelated note, what a hero this man is. The US should consider itself fortunate that it has people who at great personal cost would expose wrongdoing. It's a pity the irrationally scared public doesn't consider him a hero.

I wonder what the future holds for him. I guess when the next president comes into office and those leak stories blow over, maybe he'll get pardoned, or maybe he'll stay in Russia for the rest of his life. The one sure thing is that nobody will forget him.

Although I wonder if it can be proven or argued (or not) that Snowden is not working with Russia. Maybe in the realm of intelligence nothing can really be proven, and it doesn't mean much for me to trust my gut about Snowden not working against the US. Are there any articles debunking those theories?

When I think of Edward Snowden, a video comes to mind, where there are three cattle in a corral and a butcher kills one of them with a rifle from close range. The cow obviously doesn't quite make it out of that situation and basically just falls over on its side. The shot and the cow falling kind of startle the other two cows, and they jump and take a few steps, but then just kind of stand there and look around, glancing slightly at their fallen comrade, but otherwise go about just kind of standing there, continuing to do their cow things.

That's kind of how I see society. What Snowden revealed has been going on for a long time, it's really just the tip of the iceberg, and it will only get worse. But what do we do? We say "that's not cool" and then get back to posting our whole lives on Facebook and trusting the assurances of the same government that does far more lying than not. Here we are, each of us maintaining our own government surveillance dossier on Facebook, with all our connections and associations listed and conveniently linked. It is beyond the wildest dream of any past authoritarian dictatorship. Yet it continues; the business media proclaims there is no stopping Facebook's domination, which, going by Zuckerberg's slip-up from yesteryear, includes the intent to fully replace the internet, if only in people's minds (see his free access to Facebook in emerging markets, where he is trying to head off the internet itself becoming a thing in people's minds).

It will be quite interesting to see how this all plays out. I won't hide the fact that, no matter how I look at it, even if things seem all rosy and nice and pretty now, there are far more wildly risky and probably catastrophic outcomes down the path society seems to insist on taking.

I view Snowden as a hero in a time when it is very hard to stand up and cast a light on national wrongdoing. I often struggle when I return to the US Midwest and my family and others complain that Snowden is a traitor. I wonder why they can't see the obvious wrongdoing by our leadership and how it erodes our values.

Since it's the Bible Belt, I often find myself reminding people of the story of David [1], who hid with the Philistines when his nation and the leader of Israel turned on him. The irony is almost overwhelming to them, since Snowden so closely fits the profile of that story.

Some nights, in my darker moments, I worry that we, and by extension I, have become the bad guys in that story, more akin to the egotistical and delusional King Saul than to David.

I'm curious to know what the in-the-wild breakage rate for FF's blocking feature is. I use Ghostery myself and I find that maybe 1 in 100 are broken. I feel like that could be 1 in 1000 if blockers implemented a Google Analytics stub -- by far the most commonly required script for things to work.
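A GA stub like the one described could be tiny. Here's a hypothetical sketch of what a blocker might inject in place of analytics.js so that page code calling the classic GA globals doesn't throw; the names `ga` and `_gaq` match the real GA entry points, but everything else is illustrative and not any blocker's actual implementation:

```javascript
// Hypothetical no-op stand-in for Google Analytics, as a blocker might inject.
// Page scripts call these globals; with the stub present they run harmlessly.
var noop = function () {};

var ga = function () {};   // analytics.js command queue entry point
ga.create = noop;
ga.getByName = noop;
ga.loaded = true;          // some pages poll this before proceeding

var _gaq = { push: noop }; // legacy ga.js async queue

// Typical page code now becomes a no-op instead of a ReferenceError:
ga('create', 'UA-XXXXX-Y', 'auto');
ga('send', 'pageview');
_gaq.push(['_trackPageview']);
```

A real stub would need to cover more of the API surface (callbacks passed to `ga` in particular), which is exactly why pages still break under naive blocking.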

I truly appreciate how much time and effort Mozilla puts into Firefox privacy. I do wish however that some more effort was spent on stability and performance. It seems like every day I hear coworkers growling that Firefox has crashed on them and while I'm sure that plugins are a huge part of this it's not exactly easy for the end user to pinpoint where the issue is occurring.

1) The greatest challenge for Tracking Protection seems to me to be not technical, but strategic: How do you protect users from tracking without creating a backlash from the tracking industry and their customers (the whole Internet economy built on tracking) that will make the outcome worse or no better, and after an expensive battle. As an extreme example, if the next release of Firefox cut off tracking for all its users then I think there would be a war, including possibly lawsuits and an arms race between trackers and tracking protection. The users would be no better off (or worse off) and it would consume Mozilla's resources.

2) How do they plan to protect the great majority of users who lack the knowledge and skill to understand tracking protection? Remember that most users barely know they are being tracked, much less what that means or how it's done. Many users I deal with don't know the URL field from the search box on their home page; they don't understand what a web browser, web page, or remote server are, much less their components, requests, JavaScript, etc. They lack even the framework to begin understanding tracking. Most other end users I know would be overwhelmed by the concept and extra hassle to load a page. Also, how will most users understand why the webpage is malfunctioning, of all the possible reasons, and what to do about it? Maybe they'll think Firefox is simply broken. Maybe this is why Tracking Protection is available only in Private Browsing right now, and hidden behind a small, somewhat obscure icon (if I understand correctly); maybe that's a way of limiting it to more technically skilled users. Providing tracking protection to technical users has been done, via Ghostery, Disconnect, etc. I'd like to see it become available to everyone else. (That's not a criticism of Mozilla - this is a great, precedent-setting step forward, establishing that major browser vendors might block tracking and act in user interests over industry's, and hopefully creating some competition in that area.)

Maybe the first step is to raise awareness of tracking and the idea that users benefit from and should have the option for privacy, which can be done by simply telling users about Tracking Protection when they open Firefox after the update, whether or not they actually use it.

So if someone starts following me on the street and gets close enough to put their hand in one of my pockets and just continues on that way every morning I leave my house, for weeks or longer, I don't say, "Hey stranger, I'd like to make an argument why you should afford me some privacy tomorrow." I'd likely say, "Wtf person, you're being the kind of weird that gets the police called on people. Step back. Or better yet, go away to somewhere that I can't see you." But hey, this is the Internet, so let's just all stop thinking as though this all has anything to do with real life.

Seems like "Tracking protection" should be the default, not an opt-on. Like disabling "beacon.enabled" by default, for instance.

If you're using a general purpose blocker (uBlock, uMatrix, PoliceMan, AdBlock, etc, even NoScript), it makes sense to actually disable the internal blocker (less hooks/rules to parse), as FF's list is not even remotely comparable to what you get by subscribing to a couple of community-maintained ones, plus there's no convenient way to tweak the rules (something that other addons excel at).

FF internal tracking protection is somewhat nice for the casual user, and it's going to stir some extra polemics about content blocking (which I consider a positive thing), but it's nowhere as effective as the others. I fear it's also going to be circumvented more quickly, promoting more inline JS, supercookies and fingerprinting techniques.

Overall, it's not something I would have included in FF from a purely pragmatic perspective. It's just opening Mozilla up to direct liability, while not providing anything for the privacy-conscious person.

Firefox, as of this update, is completely unstable on Mountain Lion. I have two notebooks running Mountain Lion, and both have the same issues. Opening new tabs results in this weird bug where the current page is changed to the about:blank page, sort of, and the new tab doesn't actually appear. I've used Firefox since 2.1, and this organization is now falling apart, in terms of producing a functioning product. How is it possible to make a worse browser in 2015 than they made in 2007 or so?

A "more private browsing experience" that still features the Pocket bundleware and button in the toolbar thats still not un-installable like other extensions and requires about:config edits to disable..

I haven't been thrilled with Mozilla's direction for a while now. Bloating Firefox by embedding Pocket and Hello into it, speeding up Firefox's release frequency just because Chrome does it that way, wasting resources on Firefox-OS, and firing an employee just for donating to a political cause.

If they are not careful, they are going to run Mozilla into the ground.

Unfortunately they have also yet again changed things internally that break plug-ins, apparently including various popular ones used for privacy and blocking purposes. So in reality, since I don't habitually browse in Private Browsing mode, the more private browsing I've experienced with the recent Firefox updates has involved more ads and trackers than I've seen in years, followed by a lot of frustration searching for replacement extensions that actually work and then still more frustration configuring things manually that used to just work a few months ago.

I really wish Mozilla would get back to promoting the add-ons model that once made Firefox so attractive, and prioritising flexibility and stability accordingly. Some of the other features they've added directly might be useful, but the price of the constant change is too high, and in just about every case I can think of the add-ons community already had good, working solutions.

It's a step in the right direction, but I'm afraid we need more than this.

> Since some Web pages may appear broken when elements that track behavior are blocked, we've made it easy to turn off Tracking Protection in Private Browsing for a particular site using the Control Center.

Whitelisting a whole site because it "appears broken" is a pretty weak approach, and clearly incentivizes "brokenness". I notice the spies (google etc.) are more intelligent and creative than the defenders of privacy.

We need a browser that can make such sites work - for the user. Without leaking any cross-site information. This involves rewriting URLs and cookies, or "mixmastering" identifiers across a cloud of users.

> Today we're also releasing new visual editing tools in Firefox Developer Edition, including Animation Tools that work the same way animators think.

To me this sounds like fiddling while Rome burns. Typical of their track record of wasting energy on irrelevant projects instead of making a great browser.

I don't think the title is accurate. The report seems to know exactly where the money went. They just don't know why stuff was so expensive.

"Nobody works here anymore" is a classic excuse used in information requests like this.

These letters the DoD sent back sound exactly like the letters I read every day as a civil litigator during discovery for cases. The DoD is basically telling this inspector to fuck off and stop bothering us.

That's not to say the Pentagon has no idea where the money went. They are just not cooperating with the investigation. The DoD is saying "we fired those guys, you go find them yourself."

And it sounds like the Special Inspector General has a pretty good idea why the natural gas station was so expensive. The organization in charge didn't do a feasibility study. Then spent millions of dollars building a station when it wasn't a good idea.

Nobody stole the money. They just squandered it on a gas station to nowhere.

Edit: Unsurprisingly both the inspector general and the DoD Deputy Under Secretary are both trained lawyers. The inspector general was even a civil litigator up until a few years ago.

After reading the linked document, I can only conclude that the Pentagon did not, in fact, spend $800M. Instead, it had most of $800M stolen from it by shady contractors, and buried the details to avoid embarrassment.

This has been happening to the US federal government an awful lot lately. This is rather disturbing, and appears to be specific to recent times and specific to the United States.

"The Pentagon is the only federal agency that has not complied with a 1992 law that requires annual audits of all government departments. In 2009, Congress gave the department until 2017 to be audit-ready."

I mean we're talking about $600B/year un-auditable even in principle. Of course one can't say that DOD is negligent or non-responsive or not taking necessary actions - after all the DOD did create the "Office of Audit Readiness" which now manages the plans for achieving that readiness ... sometime after 2017 according to their recent updates.

I like this one - "lack of ability to maintain documentation to support transactions." - and their plans & promises to buy ERP. That really puts them on track for audit readiness ... in the next century. And after all of that, you're asking about a meager $800M :)

And of course it is hard not to laugh seeing the Congress trying to threaten the DOD with not letting the DOD buy new toys :

"For failing to obtain an audit for fiscal years after FY2017, the bill [...] prohibits DOD from using funds for certain weapons, weapons systems, or platforms being acquired as a major defense acquisition program."

I think they probably do know, it's likely just excess spending on internal programs that probably failed and were later merged with bigger projects in a sense 'laundering' the money from public scrutiny. I mean just look at the CIA's mind control program from the 50's to 70's, they probably blew billions on a program that just got a few people on the government's payroll zonked on acid, scarred a few for life, and ultimately just led to the giant crowd of hippies on the white house lawn. Which hey, I say makes it money well spent! But tough to follow as far as a paper trail is concerned.

At least it appears that there is actually a filling station to show for the money. I worked with people who had worked in Afghanistan as contractors for the US and UK governments, and they told me that often millions would be spent on building a new school in a remote area, through a chain of sub-contractors, each creaming off slices of the money both through margins and bribes, and yet when someone later went to the site to inspect the school, they found no building at all.

I heard about the $43MM gas station (similar gas stations have been built for half a million). At the scale of government, $40 million is a significant miscalculation but not scandalous. To learn that the bloat goes beyond that to a staggering $800MM, nearly a billion dollars, is very troubling.

This isn't the problem. Sick as it may be, burning $800MM is a rounding error. The problem is in that EVERYTHING in government is horrendously wasteful. The results we are getting for the taxes we pay and the money we borrow are equivalent to pennies on the dollar. THAT, is the problem.

On topic of docker and multi-container, multi-machine orchestration... Is there a comprehensive "docker deployment for dummies" guide out there? For example, let's say I have couple web applications with their dockerfiles ready, a database and a redis instance on software side, and then couple server instances for it all to run on. Where do I go from there? What's the best process to package everything up and get it to run on those servers? Deliver updates to those applications, preferably in zero-downtime manner? I have a vague notion that my CI should be building the images, and pushing them to something called docker registry. But how are those secured? Is that a paid service? And what happens then, how do servers know to fetch and run the new version?
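Not a full guide, but the usual flow, sketched with made-up names: CI runs `docker build` and `docker push` against a registry (Docker Hub hosts private repos as a paid tier; the open-source `registry` image can be self-hosted, secured with TLS plus auth), and each server then runs `docker-compose pull && docker-compose up -d` against a compose file roughly like:

```yaml
# Hypothetical docker-compose.yml for the stack described above;
# image names, registry host, and ports are all made up.
web:
  image: registry.example.com/myorg/web:latest  # CI pushes this tag
  ports:
    - "80:8000"
  links:
    - db
    - redis
db:
  image: postgres:9.4
redis:
  image: redis:3
```

Note that `up -d` recreates changed containers with a brief gap, so zero-downtime deploys need something extra in front, such as a load balancer draining old containers or an orchestrator doing rolling updates.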

How is the multi-host networking implemented? Is there a dependency on a service discovery system? What service discovery system? Or are they using portable IP addresses? How are those implemented? Overlay networks? BGP? Or is it doing some crazy IPTables rules with NAT?

Will it work with existing service discovery systems? What happens if a container I'm linked to goes down and comes up on another host? Do I get transparently reconnected?

There's so much involved with the abstraction they're making that I'm getting a suspicion that it's probably an unwieldy beast of an implementation that probably leaks abstractions everywhere. I'd love to be proven otherwise, but their lack of details should make any developer nervous to bet their product on docker.

I don't quite understand the swarm & compose workflow for production. I'd rather use a declarative language to specify what the systems look like, potentially with auto-scaling, health checks to replace containers if they go down, etc. I don't want to run one-off commands to launch containers based on local instead of centrally stored configuration, run one-off commands to launch the underlying hosts and to scale to more instances (which then isn't persisted anywhere), etc.

I feel like I'm just not understanding the "docker approved" approach. Which is surprising because docker itself is so great.

The networking stuff seems interesting though, I'm very curious how the rest of the ecosystem will evolve to take advantage of it or not.

EDIT, the missing content: I mean, currently it mostly is for the big users. There isn't much for "small" users. The big things like Kubernetes are really hard to configure and maintain. I mean, it's easier to maintain ansible/puppet/chef/etc. scripts than to maintain a real "docker" environment. Even looking at Deis, Flynn, and OpenShift, it's not just "run this, upgrade with this".

After you set up the whole thing, you need to create huge buildpack scripts or Dockerfiles or Kubernetes configs or whatever. You just needed process isolation; now you're building an infrastructure on top of an infrastructure.

Great to see this. I've been missing Docker Compose on Windows. Having played around with Swarm for some time now, I was hoping for it to become production ready. Does anyone know if Swarm can now reschedule failed containers to another host? Couldn't find this detail in the blog post about it.

I am really trying to figure out the ecosystem. I did some stuff with a single server but now as we need to move it to multiple servers we have Rancher, Weave, and so many others (kubernetes?). And now docker has integrated multihost networking so I am really not sure how to proceed.

What's the relation between Docker Swarm and Hipache? Is Hipache discontinued? Is Swarm built on it? Is it compatible?

Also, is it possible to make Swarm answer the same client IP with the same backend server during a 'session'? This is very important for database applications, where the sync after each write may take some time and you don't want the UI to show different states from different servers until the system has settled. Hipache AFAIK doesn't offer this, which IMO is its biggest downside.
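I don't know of Swarm or Hipache doing this natively, but if a separate load balancer is an option, HAProxy's source-IP balancing gives that kind of stickiness; the backend addresses below are made up:

```
# Hypothetical HAProxy backend: hash the client IP so the same client
# keeps hitting the same server across requests (no cookies needed).
backend app_servers
    balance source
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

The trade-off is that all clients behind one NAT land on one server, and stickiness breaks if a client's IP changes mid-session.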

Public Citizen has released initial analysis, http://www.citizen.org/documents/tpp-ecommerce-chapter-analy..., "The E-commerce chapter addresses a range of issues including duties on digital products, paperless trade administration, and rules on electronic signatures, net neutrality and data protection. The text also includes provisions limiting the ability of countries to keep data within their territorial borders.

any legal system that imposes limits on private sector data transfers to jurisdictions for the purpose of safeguarding citizens' data against foreign government intelligence agencies, as was recently accomplished by the Court of Justice of the European Union in Schrems v Facebook Inc, 2015 Case C-362/14, could contribute to violation of Section A of the TPP's Investment chapter and be subject to sanction and heavy penalties through the investor-state dispute mechanism.

Article 14.17 prevents governments from requiring the disclosure of source code as a condition of import, distribution, sale or use of software or of products containing software while the Article excludes disclosure obligations in commercially negotiated contracts, it does not exempt source code disclosure provisions imposed by means of a software license.

As open source licenses are not commercially negotiated but rather imposed on others, there is concern that any attempt to enforce such licenses against third parties by means of the courts would amount to a violation of this Article, opening the country whose court system carried out such enforcement to heavy-handed penalties through the investor-state dispute enforcement mechanisms.

addressing cybersecurity breaches can require mandating the publication of source code so as to facilitate fixing of security flaws. The TPP's prohibition on such requirements could undermine security measures of this type."

You can peaceably gather in protest and maybe get a few sniping remarks on the nightly news, you can call, write, or knock on your representative's door, you can donate money and time, but none of it will stop this treaty from being passed, because those in power wrote it for themselves and will pass it for themselves.

If anything were to stop it, it would be widespread disregard and disobedience of the illegitimate laws it supposedly creates.

This really doesn't seem that bad. It's mainly restricting what governments can do to restrict their people. In other words - keeping trade free. Isn't that kind of a good thing?

- Governments are not allowed to restrict where companies store user data.

- Governments can choose how to deal with spam, they don't have to adopt the US's CAN-SPAM law.

- Governments can't force companies to disclose their source code.

- Copyright term hasn't been extended to life + 120 years as earlier feared. Only to life+70 years which it already was in the USA anyway.

- Governments don't have to impose net neutrality. That's an issue in the US, but not everywhere else. And they still can if their people want. Such restrictions could easily backfire, especially when they have exemptions for a few hand-picked uses like VOIP and telemedicine. So what if somebody invents a new technology that also needs preferential treatment for latency or bandwidth?

- Governments aren't allowed to impose security restrictions on users as a tool to impede free trade. Does anyone really want the government to dictate how they do internet security? Or for foreign companies to be blocked in favor of domestic competitors?

I don't worry about stuff like this too much, or stuff about the UK wanting to do stupid stuff like ban all encryption. I believe the internet is going to become more private and more anonymous as time goes on. Eventually everyone will be using the equivalent of VPNs on machines/browsers that don't give out any identifying information unless a user extremely explicitly tells it to. Or perhaps something similar to Freenet will become much more popular. We're already seeing hardware (like the iPhone) coming encrypted from the manufacturer with seemingly no way for any government agency to decrypt it forcefully. Ad blockers and tracking blockers are more popular than ever. Firefox just today released an update to help prevent trackers.

It's just a matter of time - ISPs and governments and corporations will lose the ability to track their users outside of their specific platform, and many of the platforms we use today will be replaced with P2P alternatives that make tracking impossible and aren't "owned" by anyone. I am sure the governments of the world will be livid.

What confuses the hell out of me regarding the TPP - and maybe it's just because I'm in the HN/Reddit echo chamber on this - is that if the TPP is so damn important to reining in China in the 21st century or whatever, then why did they load it up with a bunch of unrelated antagonizing bullshit?

It doesn't seem to me that the intellectual property provisions of the agreement are all that important to the overall stated goals of the TPP. Yet they are so fucking regressive and antagonistic that there is some chance (I guess? Again, echo chamber...) that they will sabotage the rest of the agreement. After SOPA, etc., if it were me and I wanted to be sure that the TPP passed in enough Pacific Rim countries to make it effective, I would keep anything remotely like SOPA as far away from my precious treaty as I possibly could.

Instead, the IP portions of the agreement are basically the language that was in SOPA all over again, which pissed a whole lot of people off last time. It's really hard to take seriously the claim that the TPP is so important, when the people drafting it are including language that is pretty much guaranteed to stoke vigorous opposition, for reasons that are mostly orthogonal to their goals.

I've been using Ledger for almost nine years, both for my personal accounts and now my business accounts. I wrote some introductory articles about it, and further articles about how I use it day-to-day:

I've used ledger for years, but it's not a 'real' double-entry system, despite what the manual says. It's more halfway between cash and double-entry accounting. The 'auto balancing', for example, which is sold as its big feature, doesn't make sense from a double-entry perspective - the whole point is for it not to be 'auto'. Other basic things are missing - the distinction between balance-sheet and profit/loss accounts, for example. It's fine for personal and small-business accounting, but if you've had training in real accounting, some things will be confusing (because terms mean something else), and don't expect your accountant to be able to work with your reports if you haven't set up your system in cooperation with him/her.
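For readers who haven't used it, the 'auto balancing' being criticized works roughly like this: if one posting in a transaction omits its amount, ledger infers it so the postings sum to zero. A minimal Python sketch of the idea (illustrative only, not ledger's actual implementation):

```python
def balance_postings(postings):
    """postings: list of (account, amount-or-None) pairs.
    Fill in at most one elided amount so the transaction sums to zero."""
    missing = [i for i, (_, amt) in enumerate(postings) if amt is None]
    if len(missing) > 1:
        raise ValueError("at most one posting may omit its amount")
    total = sum(amt for _, amt in postings if amt is not None)
    completed = list(postings)
    if missing:
        acct, _ = completed[missing[0]]
        completed[missing[0]] = (acct, -total)  # inferred balancing amount
    elif total != 0:
        raise ValueError(f"transaction does not balance (off by {total})")
    return completed

txn = balance_postings([
    ("Expenses:Groceries", 42.50),
    ("Assets:Checking", None),  # amount elided; inferred as -42.50
])
```

The criticism above is precisely that this inference step removes the cross-check that manual double entry gives you: a typo in the stated amount silently propagates into the inferred one.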

I wanted to be able to group all my assertions together. This means the assertion needs to be able to retroactively calculate the balance on a given date. ledger-cli's assert doesn't support this. hledger does, but, well, I'm not a Haskell guy and the whole Haskell ecosystem seemed...cumbersome...to install.
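To make concrete what I mean by an out-of-order, retroactive assertion - an illustrative Python sketch of the idea (not uledger's actual code):

```python
from datetime import date

def balance_on(journal, account, as_of):
    """Balance of `account` as of `as_of`, computed retroactively: postings
    may appear in the journal in any order; only their dates matter."""
    return sum(amount for d, acct, amount in journal
               if acct == account and d <= as_of)

def assert_balance(journal, account, as_of, expected):
    actual = balance_on(journal, account, as_of)
    if actual != expected:
        raise AssertionError(f"{account} on {as_of}: got {actual}, expected {expected}")

journal = [
    (date(2015, 1, 5), "Assets:Checking", 100),
    (date(2015, 3, 1), "Assets:Checking", -30),
    (date(2015, 2, 1), "Assets:Checking", 50),  # entered out of date order
]
assert_balance(journal, "Assets:Checking", date(2015, 2, 28), 150)
```

The point is that the assertion can be grouped anywhere in the file and still checks the balance as of its stated date, regardless of where the transactions physically appear.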

uledger supports a useful-to-me subset of ledger-cli, primarily nested accounts, multiple currencies and out-of-order assertions and transaction entry. It has a bunch of tests and a very simple web balance interface. My favourite feature is the accounting-equation balance feature:
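The accounting-equation check boils down to one invariant (an illustrative sketch with made-up balances, not uledger's implementation): with ledger's sign convention, where every transaction's postings sum to zero, the top-level account balances must also sum to zero.

```python
# Sign convention as in ledger: debits positive, credits negative.
balances = {
    "Assets": 1200,       # what you own
    "Liabilities": -400,  # what you owe
    "Equity": -500,       # opening balances
    "Income": -700,       # earned (a credit, hence negative)
    "Expenses": 400,      # spent
}

# The accounting equation, in this convention, is just "everything sums to 0":
assert sum(balances.values()) == 0

# Equivalently: Assets + Expenses = Liabilities + Equity + Income (in magnitude).
assert balances["Assets"] + balances["Expenses"] == -(
    balances["Liabilities"] + balances["Equity"] + balances["Income"])
```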

As a side note: both ledger [0] and hledger [1] support time tracking/keeping. Of all the various tools I tried to use for personal stuff, it has been the most convenient one for me (it helps that I also use it for accounting), and I've been using it consistently since day 1! You can apply the same reporting/balancing to your time keeping as you do to your accounting.

Been using ledger for 9 years as well (hledger lately). Never had a hitch. The only complaint I have is that I can't (easily) keep AR in sync with online invoicing software. Minor, though. Would love to have some git-style integration with my bank and other accounts: `ledger pull savings` or `ledger pull invoices`.

There's another open source project worth checking out called Beancount. The author, Martin Blais, originally wrote it in Python to serve similar double-entry bookkeeping needs as those from the ledger-cli community. He seems to be continuing development of it, or at least porting portions of it (plugins?) to a variety of languages.

Is it possible to attach a receipt (say, scanned PDFs) to a Ledger entry? I am interested in Ledger and would like it to be the accounting system for my side business; a few more bookkeeping features, such as receipt attachment, would be great for tax-filing purposes.

Edit: I know that Ledger uses plain text for each entry, though I kind of wonder how you guys keep those receipts.

This is very interesting. I'm involved in an open source project to create a Ruby gem that extracts bank data from multiple banks and offers it via a CLI and a Ruby API. We fetch the data either from bank APIs (which we often discover through reverse engineering of mobile apps) or simple web scraping.

It's got a high hack and giggle factor, but obviously horrible usability for anyone off the command line - which includes your accountant/tax advisor.

Here in New Zealand essentially every start-up uses Xero - a SaaS accounting system that works extremely well and now has over 600k paying customers. It means one set of data, permanently in the cloud, and a host of integrations to other players.

Founders open up access at different levels - we can give read-only to directors and some investors, advisor-level access to accountants and bookkeepers and so forth.

Frankly it's part of the NZ advantage right now (120000 or so of those customers are here) - we seem to be spawning a large series of smart, low cost B2SMB SaaS businesses, who are not just following Xero's own path, but also using the tool to improve the way they do business.

There are a bunch of legit accounting packages out there. No need to re-invent the wheel or (in this case) clean the parade ground with a toothbrush. Off the shelf SME accounting package or if you really need it a proper ERP. Anything else is likely to end in tears.

Was reading somewhere that you're free to do on the fly updates of any interpreted code - so JavaScript is fine. Only compiled code can't be updated on the fly.

And in react native only the main framework is compiled, with the actual application running in a JS interpreter. So unless we upgrade the RN version, or start using a new SDK etc there will never be a reason to go through the App Store.

All interpreted code is sandboxed anyway, so there are no security risks involved.

Wow, a Microsoft product that doesn't reference other MS products! I'm so used to the 'click a button' in Visual Studio to publish your C# / ASP.NET code to access SQL Server in the cloud supported by Azure.

If you're looking to update React Native apps on iOS in a similar fashion (without going through the Apple Store), AppHub[1][2] looks to be tackling this as well. Not affiliated, just saw it mentioned in one of the React Native PRs[3] to help enable some of the functionality.

This is great. Between updates/hydration and browser APIs like web push notifications, camera, geo etc.. the app stores are just glorified CDNs. Building 'natively' for these platforms offers few advantages, but adds overhead, deployment delays, and vendor-specific concerns. Choosing to release on an app store should be a config setting, not a business model.

I've been looking at trying out AppHub https://apphub.io/, which is comparable to CodePush. I'll have to try this.

As it is, both are definitely useful for better handling upgrade process with users. A couple drawbacks with mobile apps are that it's harder to hotfix errors, and you sometimes have to support older APIs for longer. With a web app, a user can visit the site and the entire app bundle can be downloaded after a cache bust.

I wonder if Apple will ever release an API that allows apps to do this kind of hot updating natively. I could imagine breaking up an application into multiple containers and then orchestrating some kind of update process by calling out to the system APIs.

At least then it would not compromise the security of App Store / Test flight. As more meta-data was extracted/tagged with these containers you could imagine Apple reviewers start to care less about the code inside the container and more the interfaces (does it use health kit? apple pay? etc) and whether they are likely to be reviewed again. Could also look at size of binaries changes and things like that, or perhaps at LLVM byte code level for more detail.

Ionic has a product called Deploy which is currently in Alpha (production use not recommended).

They do a pretty good job explaining binary versioning, which is alluded to with CodePush but not explained in detail. Basically, it might seem too good to be true, and it is. You can push small changes without going through the app store, but you still have to re-submit for any binary changes.

The Cordova tech is very exciting and advancing fast. Today it's almost as easy as making a web app.

Your users need to hack their OS to be able to use your app though... OSes need to find a way to contain apps that don't need full system access. Maybe via a higher-level user mode, so that apps need to ask for permission before using APIs.

"As React Native on iOS requires a Mac and most of the engineers at Facebook and contributors use Macs, support for OS X is a top priority. However, we would like to support developers using Linux and Windows too. We believe we'll get the best Linux and Windows support from people using these operating systems on a daily basis.

Therefore, Linux and Windows support for the development environment is an ongoing community responsibility."

Which is to say "At Facebook we like Macs, we have Macs. Maybe someone else who likes Windows will build for Windows and Linux. Go ask them to do it."

I was recently looking for a cross platform development environment and was very interested in react-native but I completely lost interest after reading about their attitude to non-OSX support.

This is an interesting deal. There was an article posted on HN a while ago that talked about investors pouring money into publicly traded companies who directly compete with private tech companies who have insanely high valuations.

Based on a recent experience, Homeaway's search for properties available between two dates is pretty much a non-working facade.

The UI is all there... but after entering credit card info etc. and attempting to book several different places, all I got were emails stating that they were not available on those dates - despite showing up in search as available - and "please call us directly because maybe we have something else..."

It felt like shopping for NYC apartments on Craigslist... "we don't have the one you want, but what about this one?"

HomeAway has been positioning themselves to be bought by Expedia for about 12-20 months.

If you are in the travel industry, this news is not actually a surprise.

The timing is also 3 days after the close of the VRMA (a large travel exhibition for vacation rental managers, held this year in New Orleans).

I had the severe misfortune of going to last year's VRMA. I travelled over from England... anyway, the VRMA (which is supposed to be a somewhat neutral organization looking after managers' best interests) decided in their infinite wisdom to give the spotlight to HomeAway's COO (Brent) and CEO (Brian).

The keynote speech shocked everyone! Essentially, Brian Sharples turned an event that was supposed to offer generic advice for the betterment of the entire industry into a marketing pitch about HomeAway and its vision for the future. This wasn't abstract advice... this was specific detail about how managers will use HomeAway in the future. This was the KEYNOTE of an event that 800 managers paid several thousand dollars to attend.

Anyway, he told all managers that within 24 months they would be on "Instant Book". This was a shock, because the transition to Instant Book signalled that they are trying to phase out the listing model. (The listing model has always been in favour of managers, because it means they can handle their own enquiries, bookings and payments.) Pushing people down the Instant Book route has always visibly been because they need the numbers to be attractive to Expedia, and because - after increasing listing fees from $300 to over $1,000 per listing (this is for platinum) - they need more ways to increase shareholder value.

PS. On returning to England, the forums and community boards were in uproar about this. Our company paid to get the recordings of the event... guess which speech, out of 48 different lectures and presentations, was missing: the keynote by Brian Sharples.

It's actually obscene that we were paying over $100k per year to HomeAway for platinum listings, and then users on a basic listing were being ranked higher than us because they had "Book It Now" enabled on their properties. This is all because, transparently, they need the bookings to be processed through their system to prove they are a valuable purchase for Expedia. (It's been no secret that everyone has known it was going to be Expedia buying HA.)

Further proof (or conjecture?): for a long time they stopped providing us the emails of enquiries made for our properties through their system. This, they assure the community, is because of "phishing" fears etc...

I've long said the best thing that could happen to the industry is for Expedia to buy HomeAway, and I think this is going to be a really good thing. Expedia is going to force Book It Now, which isn't practical for managers who can't handle instant bookings (due to real-time availability and calendar issues).

Bit of a rant. I've never hidden my criticism of HomeAway. They've always been the gorilla in the room, forcing you to play by their rules and demanding control of the entire booking process. (Keep in mind that they long predate Airbnb. They might not be as glamorous, but HomeAway and VRBO have done amazing things for the short-term rental travel industry... which can't be forgotten.)

I used ownersdirect, a subsidiary of HomeAway, to book a villa for a holiday - it was fantastic, but the experience on their website was poor. It looks like the HomeAway site itself is much closer to the Airbnb experience though - messaging on site instead of by email, payments on site instead of through PayPal.

They seem to charge to make listings, though, to rank listings (at least in the first instance) by how much you paid (subscription level), and to feature listings solely for paying more. I expect Airbnb has much better sorting by not having that kind of pay-to-rank, pay-to-be-featured mess.

Many cheap brick buildings from 100 years ago are gone, while the high-quality ones are more likely to remain.

Also, steel lintels are often used over windows in brick buildings. They don't last as long, but are fine for cheap construction that is not expected to last - i.e., the kinds of buildings that are unlikely to be around in 100 years.

The quality of modern construction (not necessarily brick) is a big pet peeve of mine and something I keep looking for answers to.

The way I see it, some time right after WWII, people in the US suddenly decided to live in poorly built, ugly-looking dwellings regardless of their income level. It is especially easy to see when looking at NYC buildings; some rentals are even explicitly advertised as "pre-war".

Even looking at materials used for construction today, I can't figure out why everyone thinks drywall-on-sticks is acceptable. Literally every multi-story home I've been in felt hollow and shaky if you jump on the 2nd floor, because there's no mass anywhere.

This is clearly not a cost issue: I have taken tours of brand-new multi-million-dollar homes in Austin, TX just for fun. While they all had top-notch appliances and finishes and a gazillion square feet and bedrooms, they were also built using the same "toy" materials and generally similar architectural patterns as middle-class homes.

Is the brick actually holding up the building? In many modern buildings, it's just a veneer, about 1cm thick. The steelwork holds it up. The new Box.net HQ in Redwood City looks like a brick building, but it's not; it's steel and concrete with about 1cm of brick on the outside.

There's some nice work being done with brick today.[1] Some of this is gentrification, built to fit in with existing brick buildings, or to imitate them in new construction. All those examples have recessed windows, although not structural stone lintels. Many lintels today are precast stone and decorative; steel is carrying the load.

Robotic bricklaying is here.[2]

In earthquake country, you really don't want tall brick buildings where the brick is structural. San Francisco is very anti-cornice; in even minor earthquakes, overhanging masonry cornices tend to fall off and kill people.

> "Part of the blame, I feel, rests at the feet of the Modernist movement, a movement that idealized the cube and disdained roof overhangs. Modernist architects were ignorant of the entire concept of moisture management. The fact that thousands of Modernist buildings suffered water entry problems did little to deter architects from falling in love with Modernism and Brutalism. This tragic love affair contributed to the withering of age-old skills."

I believe this is a fair assignment of blame. It's analogous to the complaints people have about some modern web design - total focus on appearance at the expense of usability or technical quality.

The architects produce buildings that look good on paper, because that's what wins the contract. The next client isn't going to go to their previous building and do a customer satisfaction survey on the users. Nobody ever does.

Edit: Incan stone construction is one of the great examples of ancient 'over'building: precisely fitted hand-carved stone, good for five centuries. And Rome has plenty of 2000 year old brick buildings, especially the Pantheon dome.

Essentially, Holladay is complaining about the lack of water detailing, which is super important to the durability of brick. It doesn't need to be as fancy as his first example, as the link above shows, but you need someone to care about it.

Essentially, the issue building science specialists have with modernist architecture is that it often favors geometric simplicity over proper protection of the materials. Brick can withstand water, to an extent, but without drip edges and other details, it's quickly going to get damaged and ugly.

Note that building science nerds also tend to hate bumpouts and complicated rooflines because the air tightness and insulation details are hard to get right (and usually they just aren't done properly).

How do you control it - a transistor and an oscillator? Nah, just an Arduino.

So with microcontrollers we lose a lot of applied knowledge of analog circuits, and I suspect something similar is going on in architecture. The hours an architect spends learning about modern materials are not spent thinking about brickwork, and consequently a modern architect is a lot worse at building brick buildings than an architect one hundred years ago.

The author admits to the limited comparison. I have a few other reasons:

1. We actually care if our buildings are insulated now and withstand earthquakes.

2. People are so wealthy here they don't have to get it right, they can always pay to do it over again. They don't care to put in the research to make sure it is done right. [0]

3. Corollary to #2, we don't need our buildings to last 100 years because we expect the area to be overtaken by increased density by then?

[0] CSB: Person tells about friend that bought house in Las Vegas just prior to 2008, has enormous cooling bill because house doesn't have overhang to protect southern exposure from the sun and HOA won't allow her to alter it. Asks why the government doesn't protect her. I ask why she didn't do a little more due diligence before spending $300k. He hadn't thought of it that way and considers it.

It is, probably, a global shift from slow and costly perfectionism to quick and cheap "good enough" (in the worst possible sense of "getting shit done"). What they call cost cutting or "optimization" is merely a form of cheating, concealing an inability and unwillingness to provide quality.

BTW, most of the buildings which collapsed in Kathmandu around the new bus stand were those which had been built quickly, with cost cutting (cheap, thin steel bars, thick layers of cheap cement between bricks, etc.).

The small house I live in is 100 years old, and built of brick. It is a tremendously (over-)engineered building: two layers of brick even for interior walls, exterior walls are around 1 ft thick, and four massive chimney-breasts (although two were sadly removed by previous owners).

My theory is that they just had enormous amounts of cheap labour 100 years ago, and you'd never be able to build such a building today because it would be far too expensive.

...but comparing a bank (a building that in 1891 had to LOOK expensive) and student flats (a building that has to BE cheap) results in the rather underwhelming discovery that because they had wildly different budgets with completely different aesthetic aims, they ended up with different built qualities. Shocking, isn't it?

If they want to make an apples-for-apples comparison, the author should come to the UK and compare our 1890 semi-detached with any post-70s new-build. There are certainly ecological issues with the older building (that are expensive to retrofit past) but the quality of building and workmanship is drastically better in the older houses.

And [at least in the UK] this isn't a case of crappy houses made of sticks falling down. With the rarest of exceptions, there is no "survivorship bias".

Alright, too many mentions of survival bias and too much skepticism. There're large parts of some cities that are almost exclusively constructed from brick. It might be bias or it might not be.

It's clear that nowadays buildings are made cheaply. For example, the construction of the regular American suburban "stick" house is just the cheapest and quickest way to put up walls and a roof. What you get is something that's badly insulated (both from weather and sound) and just isn't very strong, and the technique is gaining traction in other parts of the world too, replacing concrete, beams and brick.

If you are curious about this topic you should bring it up with some architects from different countries. From my understanding there are indeed regressions in building quality in some countries but it's not entirely clear what causes it other than decisions that have been made at the time.

In particular, the bricks did not decrease in quality, but the way buildings were built did. For instance, for a while people paid less attention to protecting buildings from water damage in order to achieve more interesting designs.

A particular crazy architectural style that suffers a lot from this is British brutalist architecture.

This sort of "commodification", for lack of a better word, is visible in many areas, I think. I was thinking about this lately in relation to tools. It's great that you can now get tools for unbelievably low prices compared to, say, 50 years ago. This is a big win in many instances, because a shitty tool is often better than no tool at all, and it opens up access for many who would never be able to afford the old tools.

The flip side, however, is that it's devalued quality. It's remarkably difficult to actually find high-quality stuff these days. Even what used to be high-end brands have been bought up by some conglomerate that is now selling cheap Chinese versions under the old names.

There's still a market for quality, of course. You don't use tools from Harbor Freight when building rockets, but you'll never see those in any store you visit and they're likely to be priced far outside the reach of a normal person. It's like the middle ground has been lost, most stuff is cheap and low quality and then there is this small high end of really expensive stuff.

Buildings from 100 years ago that still stand today necessarily must have been those that were most carefully constructed or those that have been thoughtfully preserved. This creates a biased comparison between the highest quality buildings of the past and an average (or perhaps worse than average) building from modern times.

This article could conclude that not all brick buildings today are superior in construction to the highest quality buildings built 100 years ago, but making a more general statement would be a fallacious extrapolation.

As I sit here in a rather old (1920s), though never very expensive, house, I would say there are good and bad things about it. There's very little insulation; the heavy exterior is only a couple inches from some sort of plaster board on the inside, causing many electrical boxes to be shallow. They really didn't care about the price of heating the house when they built it - or maybe they did it to be cheap - but in any case that cost has gone way up. I do like the actual wood floor and the heavy beams used in the construction of the basement and such. The plumbing is sometimes far more creative than is easy to fix now, and I've found myself just cutting sections out and replacing them. I actually have some outgoing pipes made from lead so they can be bent in curves. The terracotta drains in the yard probably need to be replaced. The wiring was the first thing I replaced; knob-and-tube without grounds was just scary. Oh, and since the entire house was painted with lead paint (yes, they painted the formed bricks - not exactly like in the article, though still bricks), I certainly wouldn't eat something grown close to the house. Also, I have steam heat, which is interesting all by itself. So all in all, a not-super-expensive old building tends to be a lot of work these days, and I would rather have a newer, even if flimsier, construction next time. I think I'll go along with the commenter who said the surviving old buildings are a biased sample, because they are generally the best of the best of what was built in that time period.

Survivorship bias and structural integrity aside, I suspect it's all about the cashflow.

The bank was built in 1891. The FDIC wasn't around until 1933. The appearance of wealth and institutional stability was a very important marketing tool to late 19th-century bankers wanting patrons to trust them with their money.

The dorm houses kids fresh out of high school who can't / don't want to live off campus. I'd guess the building is attractive enough to most people to avoid negative attention, and -- as the article indicates -- it obviously isn't swaying money away from Dartmouth, so why bother?

On my way to work, they're building this wonderful brick sidewalk. While still just a sidewalk and not a building, it's remarkably well crafted and detailed. I think we CAN do it; we just choose not to, economically.

I think somebody would rather save a lot of money and build something "good enough" these days than invest in craftsmanship. Also, I bet you don't have to look far to find a bunch of counter examples in modern times.

More broadly on this subject, I recommend that every software engineer read the book How Buildings Learn, by Stewart Brand. It's a book about the lifecycle of buildings, design compromises, and how buildings are altered and repurposed over their life. It's a fascinating way to think about software as well.

There are some other great explanations in this thread (survivor bias seems very plausible), but I'll throw out another based on my experience with other construction trades: economics.

If you look at old buildings, you tend to notice that they also have a lot of intricate plaster-work that you never see anymore. Why? Because it used to be much cheaper to hire skilled labor than it is today. You can see a similar trend every year in the Christmas Price Index, which tracks the cost of the items in the 12 Days of Christmas song. The prices of goods tend to stay stable, while the price of labor tends to increase significantly.

For our brick buildings, I tried to find the best numbers I could, and here's what I came up with:

- In 1894, bricks cost about $5.70/thousand [1], which is $165.51 in today's dollars.

- Today, you can get bricks wholesale for $220/thousand - and that's what I found online; I imagine an actual wholesaler is less. [2]

- That's an increase of about 37%.

For the bricklayer, the average wage in 1891 was $4/day, which is about $110 in today's dollars [3]

Today, the median bricklayer pay is $24/hour [4], which is $192 per 8 hour day.

That's an increase of 75% in the real wages of the bricklayer, and it means that the rate labor costs have increased is double the rate of material costs.

In 1891, it may have made financial sense to pay for a bricklayer to make intricate, high quality buildings. In the past few decades, it's likely that's no longer the case.
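To check the arithmetic, a quick Python sketch using the dollar figures quoted above (taking the today's-dollar conversions as given; with these exact numbers the material increase comes out nearer 33% than 37%, but the conclusion - labor rising at roughly double the rate of materials - still holds):

```python
# Figures from the comparison above (today's-dollar conversions taken as given).
brick_1894 = 165.51   # $/thousand bricks, 1894 price in today's dollars
brick_today = 220.00  # $/thousand bricks, quoted wholesale price
wage_1891 = 110.00    # bricklayer $/day, 1891 wage in today's dollars
wage_today = 24 * 8   # bricklayer $/day today ($24/hr x 8 hr)

brick_increase = brick_today / brick_1894 - 1  # ~0.33 (33%)
wage_increase = wage_today / wage_1891 - 1     # ~0.75 (75%)
ratio = wage_increase / brick_increase         # ~2.3: labor outpaced materials
```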

Where are brick buildings being torn down left and right? I challenge the survivorship bias argument: where I live on the US East Coast, multiple cities push for historical status on buildings past a certain age in order to retain character.

One is pre-stressed concrete. This has replaced I-beams for highway bridges. The problem here is how do you inspect them? With steel I-beams you can use your eyes. With concrete maybe you need some kind of ultrasound equipment to check the tension cables? Do you trust those responsible for long-term maintenance to do it?

Another is engineered wood I-beams for houses. Basically these are floor joists made out of plywood. How long will they last? What happens if they get wet? If there's a fire, the house is gone, because the floor joists will be ruined by the water used to put the fire out.

There were certainly mistakes in the past as well. One is building with cast-iron beams. They look nice, but they crack.

A lot of examples of "superior" technology from 19th century turn out to be expensive stuff created for upper class, compared to mass-produced items of today that are affordable for the general population.

Bank buildings built before 1929 tend to be massive masonry structures with Greek columns, marble, etc. Ones built later tend to be cracker boxes in strip malls.

My theory is that the older banks needed to impress their clients with stability, conservativeness, safety, responsibility, etc. After FDIC, customers looked to the government for that, and so banks no longer needed to spend the money on the building.

The massive, glittering vaults are sadly gone now, too. The money exists as data on a server somewhere, little need for a vault.

What is the point of comparing a high class building with a low class building? They have not been constructed with the same budget and not to achieve the same goal. Clearly a dorm aims to lower the cost to an extreme, exactly the opposite of what a bank would do.

In Somerville, MA there is an area with some converted loft buildings including new construction loft-style buildings. The new building walls use masonry blocks (large grey blocks) for structure, with decorative brick on the outside. On the inside, there is a cavity for insulation and wiring, and an interior brick wall.

The masonry work is excellent, and is probably the best way to create a wall that looks like brick inside and out yet meets modern insulation demands. But it is a gratuitously expensive construction technique to try and make a building look old and classic. If you don't value that particular aesthetic, you would select other materials. If you want to pander to that aesthetic but don't have a premium loft budget, you might end up with crap like that dorm.

You can see the results of this all over NYC. A building down the street from where I live has had scaffolding around it for over 2 years as they replace the bricks (in NYC, building owners have to test the bricks every 5 years if their building is over 6 stories). This building was built in the 70s. I have no idea when they are going to be done.

Because brick has become affordable for the common man and building techniques value speed? The average person building a house in 1915 would have a house built with wood siding because brick would have been out of his price range, even with the slave labor wages and lack of sufficient building regulations.

> The move potentially puts Mexico at the forefront of an international movement to decriminalize drugs despite a decade-long militarized crackdown on drug cartels which has cost the lives of around 100,000 people.

I never thought this would happen so quickly. Latin America has a drug problem that holds back all development, and the root of that problem is that drugs are illegal. I'm happy that Mexico is being so brave. ¡Viva México, cabrones!

Ok, let me try to clarify, because a lot of nonsense has been spoken here already.

Mexico's Supreme Court has no right to say a thing with regards to US law, US policy or the US Constitution. The article is talking (very briefly) about the Mexican Constitution. One would think that was obvious, but apparently even on Hacker News, gringo arrogance knows no limits. We do have our own laws and institutions in other countries, in case you have never bothered to notice.

Second, it is a very interesting legal case which the article does no justice to (it rather gets lost reporting on the war on drugs and the posture of conservative elements in society). There is this group of activists (SMART) that made a request to COFEPRIS - the branch of the Mexican government roughly equivalent to the FDA - for permission to produce, store and consume cannabis with no profit motive. This request was obviously rejected, which is what SMART intended.

Since there was a decision of the government that affected their interests, it was possible under Mexican law to demand a "Juicio de Amparo", which can be roughly translated as a "Sanctuary Trial" and is similar to suing the government, but not quite. IANAL, but the bottom line as far as I know is that you can ask the court to evaluate and set aside decisions from other branches of government if you think your rights are being violated. The SMART activist group did win that trial.

What you are seeing talked about is the last appeal in that trial, which was decided by the highest court in the country, and which the activists won again. The end result is not legalization, but an undermining of the Mexican government's - and in particular law enforcement's - ability to crack down on marijuana users with possession charges. It is of course open to debate whether that will benefit society at large or just some interest groups, or who those interest groups might be.

In the long term, this also creates a precedent that might or might not result in the legalization of soft drugs... but it is too early to tell at this point. At the very least the subject, which was taboo not that long ago, is being openly discussed now.

What shocked me about this article was the poll showing 77% of people in Mexico are opposed to legalization. Why? Social conservatism? Do they not understand that prohibition benefits the cartels and funds their violence?

"If people let the government decide what foods they eat and what medicines they take, their bodies will soon be in as sorry a state as are the souls of those who live under tyranny." --Thomas Jefferson

Interesting caption to one of the photos regarding how the temperature of the environment has changed:

>Axel Lindahl's picture of Engabreen from 1889 shows the foot of the glacier, where there was only ice, glacial gravel, water and bare mountainsides in a seemingly cold and hostile landscape. Now, more than 120 years later, the valley has become far more fertile. Birch forest, shore meadows, willow thickets and marshland have established themselves, while the glacier arm has retreated far back up the mountainside.

Idea for computer-imagery/ML people: a method to realistically recolor old photos by sampling from new photos of the same place, with some caveats (e.g. each object in the old photo must have a corresponding object in the new one).
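A toy sketch of the simplest version of this idea: per-channel histogram matching, which transfers the tonal distribution of a modern reference photo onto the old one. The function is my own illustration, not an existing API; a real system would also need the alignment/correspondence caveats mentioned above:

```python
def match_histogram(source, reference):
    # Map each source pixel value to the reference value of equal rank,
    # so the output's value distribution matches the reference's.
    # Run once per color channel to "recolor" an old photo.
    order = sorted(range(len(source)), key=lambda i: source[i])
    ref_sorted = sorted(reference)
    out = [0] * len(source)
    for rank, idx in enumerate(order):
        # scale this pixel's rank into the reference's index range
        ref_idx = rank * (len(ref_sorted) - 1) // (len(source) - 1)
        out[idx] = ref_sorted[ref_idx]
    return out

# old-photo values take on the reference's distribution, order preserved
print(match_histogram([0, 200, 100], [30, 10, 20]))  # [10, 30, 20]
```

Libraries like scikit-image ship a production version of this (`exposure.match_histograms`); the interesting ML part would be learning which reference regions correspond to which old-photo regions.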

Obviously they're not going to change over a mere 120 years, but the fact that you can see and touch the exact same spot as how many other thousands of people across the generations - that's the really interesting thing.

1. Their magazine: Almost all magazines have been replaced with websites. Their website dramatically undersells their content. Look at it, really. It's basically like ClickHole, but with stories about climate change. The dramatic photography of the magazine only comes across in about 50% of the photos, and the headlines are all in clickbait format.

2. Their television presence includes shows like "Drugs, Inc.", whose primary job is to scare old people with re-enactments of drug crimes. Who would pay for that "value"? (I guess people who watch police shows? But what does that saturated market have to do with their brand?)

3. Their YouTube stream is a massive quantity of short, low-quality videos. I subscribe and only watch about 1 in 50 of them. Another problem with their videos is that so few have narration, which I feel is a key feature of travel and wildlife shows.

4. They haven't handled outreach to a younger generation. With all the urban young people (esp. women IMO) who love to travel the world with disposable income (no families, marrying late), NG has no selling relationship with them.

5. The global geopolitical situation is more interesting than ever with worldwide communication, but I don't see NG addressing that. Maybe they are - somewhere? - but their marketing isn't penetrating.

I feel like they could turn it around if they primarily address the youngest generation - perhaps get more involved in the travel and outdoor supplies markets.

Before we blame Murdoch, I think it's worth remembering that NatGeo was in a bad position when it was sold off to Murdoch...that's why it was sold off in the first place:

> The magazine's domestic circulation peaked at about 12 million copies in the late 1980s; today, the publication reaches about 3.5 million subscribers in the United States and an additional 3 million subscribers abroad through non-English-language editions. Advertising has been in steady decline.

Just because the layoffs are happening as NatGeo becomes part of Murdoch's empire doesn't mean this was a greedy, self-serving move rather than one that was long overdue, and for which Murdoch now takes the recognition/blame, likely in exchange for a purchase price he was willing to accept. Would these layoffs not have happened if NatGeo hadn't managed to be sold off as it declined?

>Several people in the channel's fact-checking department, for example, were terminated on Tuesday, employees said.

What a ridiculous line. "For example". It just so happens that the example they chose to name and print is just right to get the people up-in-arms about Murdoch cutting the fact-checking department, while in reality people from many departments are being cut. We've seen it in this very thread.

>In addition to the layoffs and buyouts, the National Geographic Society said it would freeze its pension plan for eligible employees, eliminate medical coverage for future retirees and change its contributions to an employee 401(k) plan so that all employees receive the same percentage contribution.

Of all of it, this pisses me off the most. Because who needs a retirement? Thanks for your years of hard work, here's a 401(k) fucking income supplement. Oh, and no medical coverage for you. Have fun working until you die.

"Please watch your inbox for important information about your employment status tomorrow. [...] Looking ahead, I am confident National Geographic's mission will be fulfilled in powerful, new and impactful ways, as we continue to change the world through science, exploration, education and storytelling."

Why do CEO-types feel the need to write these bad-news emails like they were some kind of press-release?

So on a somewhat unrelated note, this is what I fear Dell will do to EMC & VMware.

Dell's background is cheap computers. EMC service is pricey but top-notch. The engineers there have been nothing but fantastic to work with, and I fear that with this acquisition all of that is going to go away, because Dell can't get the value proposition out of their DNA.

After subscribing for 20 years, I fell out with NG after repeated attempts to renew with a credit card simply failed. They wouldn't even let me send them a check. I found it to be a curious indicator of decline.

A year later when I was considering trying again I came across a racist historical re-enactment video of theirs depicting medieval Malian sultan Mansa Musa as a grunting savage. Just not feeling them any more.

How does a nonprofit 'sell itself' to a for-profit entity? My understanding of how nonprofit charters work is that when a charitable organization decides its operations are no longer sustainable, it must be 'taken over' by another nonprofit or else simply dissolved altogether. (Edited for concision.)

Terribly depressing. I've been buying NG for a decade and now have a whole row in my bookshelf of yellow spines. It is very disheartening to think that something as political as Fox and Murdoch can have 73% control of it and presumably influence it editorially, as well as using it to uplift and give weight to the Fox brand :(

I can't hear a difference between 96kHz/44kHz in its raw form. However, I can tell the difference from effects in audio mixing. The extra detail can really make a difference in how well an audio effect VST works.

I have a 96khz/24bit interface that I use and ATH-M30X headphones, and I can tell a difference between at least some 24bit FLAC files and 16bit highest-quality-possible MP3s. I was mixing my own music and the difference was quite obvious to me. The notable thing was that drum cymbals seemed to have a bit less sizzle and such.

Now that being said, if I hadn't heard the song a million times in its lossless form from trying to mix it, I probably wouldn't have noticed, and even then it didn't actually affect my "experience".

I'm one of those guys that downloads vinyl rips as well, but I do that mostly just to experience the alternative mastering, not that I think it's higher quality or anything. (though I have heard a terrible loudness-war CD master that sounded great on vinyl with a different master)

This article hits close to home: before I became a programmer I worked as an audio engineer at a fledgling studio in my hometown.

The amount of misinformation / junk-science in the audio world is preposterous. There's a religious-cult of an industry that feeds off the ignorance and placebos of its participants. I have many friends who swear by their What.cd 24/192 FLAC vinyl rips and spend hundreds of dollars on audiophile AC wall outlets. Not to say that there are no differences in high-end audio equipment, but so much of what's "good" is subjective.

24/192 lossless is a digital Veblen good; some people will pay more for it (and/or the HW to play it & store it), and almost all of them will enjoy it more, if only because it costs more. Whether it actually sounds better is rather tangential.

People today are often amazed when they listen to CD or turntable content through 70's era crossover speakers. Back in the 70's you'd have a stereo with 2 "speakers" that each had 3 subspeakers, for a total of six speakers. The fad today is to have 5.1 sound with a single driver in each satellite, also a total of six speakers. The spatial resolution increase is good for movies, games and TV, but surround sound in music is marginal. An amazing number of old "classic rock" recordings were done in quad, and anything by Donald Fagen will sound pretty good with Dolby Pro Logic; there are some more recent Björk recordings, but almost everything is mixed for stereo, and what you lose in frequency response is not compensated by anything, except perhaps the ability to produce more volume with more speakers.

First, let me state that I believe that CD audio, played through a modern DAC and quality stereo equipment is pretty much the pinnacle of home audio listening. That is to say, I think 44.1kHz 16-bit PCM audio is plenty good and I'm in no rush to replace my CD collection, nor do I think significant investment in higher bandwidth audio (for playback, mixing and mastering are another story) buys you much.

That said, there's one thing the article does not address and that is "beating", or really inter-modulation distortion from instrumental overtones.

Instruments are not limited to 20-20kHz. They can have overtones well above this range. Additionally, note that short pulse-width signals, i.e. transients, like drum strikes, especially involving wooden percussion, can have infinite bandwidth. (Not really infinite, but pulse width is inversely proportional to bandwidth.)

In a real listening environment (i.e. live performance) these overtones have a chance to interact with one another in the air. It is possible that these overtones may beat with one another and cause inter-modulation products in the audible range. For an example of this, play a 1000 Hz tone through your left speaker, and a 1001 Hz tone through your right speaker. You will hear a distinct 1 Hz "beat". The audibility of these is largely dependent on listening position and amplitude, but it is possible for it to occur with instruments. Since most recordings are done using a "close mic" technique (placing the microphone very close to the source), interactions such as this are never recorded.
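The 1000/1001 Hz example can be checked numerically. The trig identity sin(2πf₁t) + sin(2πf₂t) = 2·cos(π(f₂−f₁)t)·sin(π(f₁+f₂)t) predicts a 1 Hz amplitude envelope on the summed tones (this is just the beat arithmetic, not a claim about how audible it is at a given position):

```python
import math

F1, F2 = 1000.0, 1001.0  # two tones 1 Hz apart

def two_tone(t):
    # Sum of the two tones as heard at the listening position.
    return math.sin(2 * math.pi * F1 * t) + math.sin(2 * math.pi * F2 * t)

def beat_envelope(t):
    # 2*|cos(pi*(F2-F1)*t)|: the slow 1 Hz "beat" modulating the sum.
    return 2 * abs(math.cos(math.pi * (F2 - F1) * t))

# In phase at t=0 (envelope ~2), fully cancelled half a beat later (t=0.5 s).
print(round(beat_envelope(0.0), 3), round(beat_envelope(0.5), 3))  # 2.0 0.0
```

The summed waveform always stays inside this envelope, which is what the ear tracks as the once-per-second swell and fade.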

However, if full bandwidth of the producing instruments is preserved, these interactions of the overtones can be reproduced in a playback environment given equipment having a wide enough bandwidth and degree of quality.

I have a pair of Roger Sound Labs studio monitors for my speakers at home. I got to look at their insides when a technician was replacing a blown midrange speaker (they have a "lifetime" warranty; however, that warranty expired when RSL did). Looking at the crossover filter network I could see a network selecting for frequencies > 20kHz, shunted to a resistor. I asked about it, and the response was exactly like the author's: by filtering out signals higher than the tweeter could reproduce, they improved the listening experience.

It made sense to me, and I love how the speakers sound. Understanding that it's not inserting distortion makes even more sense.

In order to decimate a signal to 44.1 or 48khz, and preserve high-frequency content, high frequencies need to be phase-shifted.

This phase-shift is similar to how lossy codecs work.

For what it's worth: I'm a big fan of music in surround, and most of it comes in high sampling rates. When I investigated ripping my DVD-As and Blu-rays, I found that they never have music over 20kHz. It's all filtered out. However, downsampling to 44.1 or 48kHz isn't "lossless" because of the phase shift needed due to the Nyquist-Shannon theorem.

I still rip my DVD-As at 48khz, though. There isn't a good lossless codec that can preserve phase at high frequencies, yet approach the bitrate of 12/48 flac.

From my experience, what matters more than sample rate is 24-bit vs. 16-bit sampling in the recording/production process. Using heavy compression and EQ can mean that very quiet sounds become louder; in this case 24-bit recording is ideal. Sample-rate-wise, anything above 40kHz is fine for most ears (I've probably lost a few kHz in the upper range anyway). Another note is that most converters operate at a multiple of 48kHz, so it makes sense to use 48/96kHz if you are recording. It all comes down to how much disk space you have, and want to use up.
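The 24-vs-16-bit point is easy to quantify: each extra bit buys about 6 dB of dynamic range, so compression that pulls up quiet passages has far more noise floor to spare at 24 bits. A quick check using the textbook formula for an ideal quantizer (real converters do a few dB worse):

```python
import math

def dynamic_range_db(bits):
    # Theoretical SNR of an ideal b-bit quantizer with a full-scale sine:
    # 20*log10(2**b) + 1.76, equivalently 6.02*b + 1.76 dB.
    return 20 * math.log10(2 ** bits) + 1.76

# 16-bit is already plenty for playback; 24-bit is headroom for tracking/mixing.
print(round(dynamic_range_db(16), 1))  # 98.1
print(round(dynamic_range_db(24), 1))  # 146.3
```

That ~48 dB gap is exactly the margin that lets you boost a quiet passage during mixing without dragging quantization noise into audibility.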

I just recently purchased iZotope Ozone 7 Advanced. One feature it has is "codec preview", which lets you "solo" the codec artifacts for the MP3 and AAC formats. Even at high bit rates it's amazing how swishy bit reduction sounds. It also made me realize that what I was hearing with MP3s was artifacts from compression. That said, it's not unlike tape hiss or vinyl noise. In fact I think it can have its own charm and in some cases make the music sound more full. It's also probably why 24/192 digital audio can sound so "cold" or lifeless.

Because digital filters have few of the practical limitations of an analog filter, we can complete the anti-aliasing process with greater efficiency and precision digitally. The very high rate raw digital signal passes through a digital anti-aliasing filter, which has no trouble fitting a transition band into a tight space.

I always thought digital anti-aliasing filters were creatures from a fairy-tale world. Much talked about, but no one has ever seen one.

My understanding: if you have an analog filter of a given steepness, the only way to further reduce aliasing effects digitally is oversampling. In other words, a less steep (cheaper) analog filter plus oversampling is equivalent to a steeper (more expensive) analog filter. People tend to say "digital anti-aliasing filters" when they really mean oversampling.
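The folding that oversampling avoids can be written down directly: without an anti-aliasing filter, a tone above Nyquist reflects back into the audible band. A small sketch of the fold (the helper is my own, for illustration):

```python
def alias_frequency(f, fs):
    # Where a tone at f Hz lands after sampling at fs Hz with no
    # anti-aliasing filter: fold it into the [0, fs/2] band.
    f = f % fs
    return min(f, fs - f)

# A 30 kHz overtone sampled straight at 44.1 kHz aliases to 14.1 kHz,
# squarely inside the audible band...
print(alias_frequency(30_000, 44_100))   # 14100
# ...but at 4x oversampling (176.4 kHz) it stays put at 30 kHz, where a
# steep *digital* filter can remove it cheaply before decimating to 44.1k.
print(alias_frequency(30_000, 176_400))  # 30000
```

This is why a gentle, cheap analog filter plus oversampling ends up equivalent to an expensive brick-wall analog filter: the hard filtering work moves into the digital domain, where steepness is nearly free.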

"24/192 Music Downloads Make No Sense" seems to be a thoroughly researched and carefully written article. It explains oversampling very well; possible confusion with digital filtering (anti-aliasing or not) is out of the question. But then it goes on to talk about digital anti-aliasing filters, which makes me afraid I could be wrong.

Why do high-quality DACs clearly sound better then? And they sound better with better files. Maybe it really is all in my head, but listening to a 20,000 hi-fi system the other day (vinyl) really just shocked me.

I was listening to Marvin Gaye on my friend's system and I could hear that there were several different backing singers, all moving and at different distances from the microphone.

Are there any double-blind trials anywhere of vinyl/CD/24-192kHz with super-high-end hi-fi systems? Mostly I see people suggesting that these tests are performed from the phono output of a Mac with a pair of average earbuds...

Some [consumer] digital low-pass filters can benefit from higher sampling rates, leading to an overall better representation of the analog signal up to 20kHz. But there are diminishing returns as the filter "folds" the octaves above 22kHz; a rate of 96k for certain low-pass filters is better than 48k, but at some point there's little (if any) benefit in going to 192k or 384k. For recording studios, go as high as you can in both sample rate and bit depth, especially when you're processing the signal "in the box". Give the software as much data as possible to operate on without introducing errors and artifacts. There are diminishing returns there as well, but RTFM for (for example) UA gear and software and you're good to go.

So, no point in 24/192 because it makes no difference in playback... but having lossless downloads is important in part for enabling remix culture? There's a bit of a double-standard here. Maybe I can't hear 24/192 audio, but isn't it better input for sampling?

I love the idea the author mentions in passing of a dedicated speaker assembly for ultrasonics. This seems like something that could be a huge margin business, and the parts costs would be as low as you wanted.

> Actually, there were no compromises made. I found Richard to be an absolute delight to talk with. We discussed the architectural history of Pompeii, admired his reading library, his tea collection... :)

> I think many people misunderstand his devotion to freedom as being unreasoning in his views -- as I had, not just a few months ago! On the contrary: I proposed several Emacs-related ideas that I expected him to balk at, only to find he happily considered everything, even suggesting further improvements. At no point did I ever get the feeling that I was speaking to a closed mind.

> I only wish I lived nearby so I could spend more time with him. He is truly an amiable fellow. I have no worries about our ability to find a common path in future, if issues that threaten his goals for software freedom arise.

I've been following the discussions on emacs-devel lately and he's been showing a lot of enthusiasm for what he has in front of him, and given John's previous endeavours (use-package, eshell) I really think emacs has found a worthy successor to Stefan Monnier.

If you look for John on YouTube you'll find some great demonstrations of Emacs that he did along with Sacha Chua. I find his enthusiasm infectious. Excited to see what directions Emacs takes in the next few years.

What excites me most about the selection of John is that he's interested in bringing modern IDE features to Emacs without transforming it into Eclipse or Visual Studio. I think that he'll show us just what an excellent tool Emacs can be even in the 21st century!

No explicit ban on encryption, but the existing RIPA obligation to decrypt when you have the capability and are made to. Potential madness in the "Equipment interference" section, although the bill claims this is already authorised under different legislation.

The Bill uses "communications data" to mean what we would call "metadata", ie everything except the contents.

"Equipment interference allows the security and intelligence agencies, law enforcement and the armed forces to interfere with electronic equipment such as computers and smartphones in order to obtain data, such as communications from a device. Equipment interference encompasses a wide range of activity, from remote access to computers to downloading covertly the contents of a mobile phone during a search."

8:40 - 'Security risk' of storing communications data: "A new law to govern how police and intelligence agencies and the state can access communications and data will be published today.

Preston Byrne from Eris Industries, a cryptographic communications company which is withdrawing from the UK because of the proposed law, says the government is going to be tracking metadata which is essentially "a map of what you're thinking".

He warns the data could be compromised - citing the recent TalkTalk hack - and says this could lead to blackmail. And he argues that criminals and terrorists "don't use normal communication channels", so only law-abiding people will be affected by the bill."

Preston Byrne has a point: even common people are using VPNs and Tor. Why would terrorists bare their communications to surveillance?

The BBC is being a good state mouthpiece today - the fact that they're quoting May as saying it doesn't hold previously contentious matters (i.e. breaking encryption) is disingenuous to say the least. The bill will say that "unbreakable" encryption is illegal - which means all encryption, because if it's breakable, well, it's not really encryption, is it?

Never mind that this is totally unenforceable. I could write up a one time pad with pen and paper. Most won't. Crooked cops will sell data. They'll blame "hackers".
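The pen-and-paper point holds in code too: a one-time pad is a few lines, which is part of why a ban on "unbreakable" encryption is unenforceable. A toy sketch (the pad must be truly random, kept secret, and never reused):

```python
import secrets

def otp_encrypt(message: bytes, pad: bytes) -> bytes:
    # XOR with a random, never-reused pad of at least equal length:
    # information-theoretically unbreakable (Shannon, 1949).
    assert len(pad) >= len(message)
    return bytes(m ^ p for m, p in zip(message, pad))

pad = secrets.token_bytes(32)
ct = otp_encrypt(b"meet at dawn", pad)
# decryption is the same XOR with the same pad
assert otp_encrypt(ct, pad) == b"meet at dawn"
```

The impracticality is entirely in key distribution, not in the math - which a pen, paper, and a table of random digits handles just as well as this code.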

You only need to look at the TalkTalk debacle to see how incredibly warped this government's views are - they haven't arrested anyone at TalkTalk, who are the ones whose infosec was so poor that script kiddies could blow them wide open. Instead they're arresting children.

Oh, and I'm seriously considering redomiciling my company - we only contribute a few hundred million quid to the UK economy.

To me (a UK citizen) this is like the government tracking the title and author of every book I read, "but don't worry, not the contents or page numbers you looked at". The idea this is any meaningful barrier to finding out what you're really up to is ridiculous. Phone metadata is one thing - and still highly revealing - but much of the web is public! It's enough to make me think twice about where I browse, wondering "if I ever got challenged over it, how will it look that I browsed to this site?". That seems pretty harmful to the web - possibly even in an economically measurable way?

There is a lot of Tory bashing going on here, but this policy runs deeper: Labour tried to put through similar legislation. The coalition dropped it, but it is back. Each Home Secretary seems to become more hard-line and blinkered, like they are being poisoned by the fear emanating from the security services.

What practical steps can we take if this becomes law? If police and local councils are given access to browsing records, abuse is inevitable.

There are already well-documented examples of councils using terrorism legislation to spy on people 1) suspected of using the wrong type of rubbish bin [1] and 2) sending their children to school outside of their catchment area [2].

This type of abuse and overreach will happen frequently. Not to mention crooked police/council officials selling data, and others pursuing personal vendettas & checking up on current and former romantic partners.

The UK will become a horrible, paranoid place.

What can I do to protect myself? Use a VPN for all internet access? Use Tor (which seems too slow for most practical purposes)? What else can we do?

By their (lack of) logic, they should also have an officer following every citizen and logging where people go, so that they can know John left his house at 9:17 and checked in at local grocery shop at 9:28. With a warrant they could then obtain information that he has bought a large cucumber - let's arrest him, because he is probably cheating on the government with cucumber. He told the grocer, that how government fucks him is not making him satisfied, so he has to finish the job with a cucumber.

I read the article, but I'm no clearer on what the criteria for issuing a warrant are.

A few years ago it seemed like the answer was "because TERRORISTS", now they're also talking about organised crime and child abusers.

This government have already branded the leader of the opposition a 'threat to national security', which leads me to conclude that they are either lying, incompetent, or reading all his internet history too.

Furthermore, I've heard no compelling arguments as to why the idea of an independent judiciary (who should be the only people who can issue these warrants) is broken, or how it should not apply when it comes to the online world.

But the drip drip drip of obfuscated and fear motivated erosions to the balance of powers continues, and it's making me deeply worried about what kind of country my grandchildren will live in.

If we can't get privacy using crypto, we could always use chaffing to make their database useless. We just need a list of sensitive websites that want to hide their true users, and an ad-serving network that randomly serves up links to those sensitive websites on other web pages (but doesn't display them). In this way, everyone's browsing history will look suspicious, so the data won't be of any use.
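A minimal sketch of that chaffing idea. The domain list and helper are hypothetical; in the scheme described above, a participating ad network would issue these as invisible background fetches alongside every real page view:

```python
import random

# Hypothetical pool of sensitive sites that opt in to hiding their real users.
DECOY_DOMAINS = [
    "clinic.example.org",
    "union.example.org",
    "helpline.example.org",
    "whistleblower.example.org",
]

def chaff_requests(n, rng=None):
    # Pick n decoy domains to fetch in the background, so a per-domain
    # "Internet Connection Record" log cannot distinguish real visits
    # from noise: everyone's history contains every sensitive domain.
    rng = rng or random.Random()
    return [rng.choice(DECOY_DOMAINS) for _ in range(n)]

# e.g. three decoy fetches accompany each real page load
print(chaff_requests(3, random.Random(42)))
```

The effectiveness depends on the decoys being indistinguishable from real visits at the connection level (timing, volume, TLS fingerprint), which is considerably harder than picking the domains.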

>"Such data would consist of a basic domain address, and not a full browsing history of pages within that site or search terms entered."

Am I right in understanding they will have access to this data without a warrant? And then any 'further' data would then need a warrant.

>"For more intrusive surveillance - involving the detailed content of the communications - security services need to obtain a warrant."

So with more and more websites using HTTPS, where does this 'detailed content' come from? Is the Government expecting ISPs to collect data that doesn't exist? As far as I was aware, as long as you view a website over HTTPS, your ISP can see which domain you connected to but not which individual pages you are visiting.

I don't see anywhere in the bill what EXACTLY an Internet Connection Record is, and since there is no such thing as a standard Internet Connection Record in any of our existing network infrastructure, I assume this has been left vague so that it can be extended to whatever they want.

Nor does it define the exact kind of Internet Service Provider the law is supposed to be enforced against. (Is this only supposed to apply to those supplying bandwidth, or do all websites/services count?)

> Law enforcement agencies would not be able to make a request for the purpose of determining for example whether someone had visited a mental health website, a medical website or even a news website.

This seems to imply that there must be a whitelist of domains for which ICR collection is required. But there is no mention of such a list nor how it would be curated.

Having the govt require ISPs to collect this data about us will result in ISPs "aggregating" the data and selling it to advertising / marketing firms, insurers or anyone willing to cough up a few for your private data.

To guard against terror - terror coming from a certain group of people - we are pushed to choose between living without potential terrorists and without the Stasi, or with the potential terrorists and with the Stasi. Stasi and multiculturalism: both or neither.

Wild conspiracy theory - London is becoming the playground of the world's elites, so security is paramount. These bills are not there to keep pedophiles at bay but to prevent some form of "London spring" of the underclasses, or other forms of physical harm towards your friendly neighborhood billionaire that could damage real estate prices. The Conservatives' goal is to make elites know they are safe here, so they could switch to lower-profile security details.

I have no better explanation for why the UK is pushing so hard on its own populace.

Also, in regards to data retention - I thought the CJEU made it clear that it's against the EU Charter of Fundamental Rights. Is the UK seriously pretending that never happened? It seems their strategy is "we'll just use this new law for 2 years until it gets invalidated, and then we pass a new one that we can use for another 2 years". And so on and so forth.

U.S. companies, please stop establishing headquarters in the U.K. It's on an authoritarian path as much as Russia and Turkey are (certainly under David Cameron/Conservatives, at least).

> Aside from the fact that much of the code that was released was sub-par, the very act of putting code out into the world implied (whether intentionally, or not) a willingness to participate in a social contract with those who chose to use it for their own purposes.

Not to disparage the 100:10:1 method, but I do disapprove of this implication. I don't want people to refrain from putting things out there, just because they don't want to feel obligated to support them.

I have a public blog, where I try to maintain quality, and a semi-private blog where I try to allow myself to post stuff I'm not sure about, stuff that's not interesting to anyone other than me, stuff I don't necessarily want associated with my public persona, and so on. I know others do similar things.[1]

I think there's no particular mechanism to do that with open source, but it might be valuable. Perhaps I could have two github accounts, one where I'll put anything I feel like, and one where I put stuff I'm proud of and want to remain proud of. If something's on my "real" github, I'm saying that if you ask for support I'll try to help you, if you report a bug I'll try to fix it. If it's on, let's call it my shithub, there's no social contract. Take a look if you want, and feel free to report bugs, but I might just say "yeah, that's a bug". Or even just "not interested in that project any more, I'm not going to bother confirming this".

Then, if you want to follow this method and also want to make most of your code public, you can write down the 100, put the ten on your shithub, and publish one to your github.

Anything that I think "may" be useful to someone I put up on github. Sometimes to my surprise people use it. I remember putting something up (not even one star/issue or anything ever), then 1 year later getting a random email saying "this is great, thanks." Yeah, no problem, anytime.

I went the opposite direction (kinda) from the author. I realized that people assuming I would fix what I put up didn't matter to me. Sometimes I would make fixes, or check issues, sometimes I wouldn't. Some people got mad, some understood, some were drive by so they never checked back in anyway. Whatever, it's up there, if it really bothers them enough, they can fix it, or use it. It's up to them, not me.

However, I'm still a little like the author. I have a bunch of projects that are "good enough" for my use, but definitely not for someone else (for example, I wrote a password safe[1] cli program because I needed something that would work on FreeBSD. It only handles the features I use and may croak on a file that uses all field types). I keep these off my github.

I agree that any OSS developer should recognize their toy projects will likely not go anywhere, and keep a few projects in mind to work on. It keeps things in perspective, keeps one interested, and so on, as the author said. Doing 10 projects in parallel makes less sense to me. One can accomplish the same thing sequentially while focusing on only a few projects. Just bounce back and forth.

If I went with a rule, I think Google's 80/15/5 rule is worth further exploration. Many of the best projects start with a clear need to solve a problem. Picking one of those and making it happen should be the 80%, since others will appreciate having the problem solved. That's especially true if the author understands the domain. A significant step outside the domain, or re-applying an effective concept to a new domain, might be a 15% project. A 5% project might be a wild idea with potential but high risk and/or far outside the author's domain expertise. The main energy goes into one or two 80% projects, while the 15% and 5% projects serve as a mental break from them.

This model seems more productive while still letting a person screw around on side projects that might be fun, good learning experiences, or time-wasters with a limit on the waste.

There's a tangential problem I've been thinking about a lot recently: the long tail of open source projects on GitHub.

There are 20M+ repos on there. Which leads to a serious discoverability + signal/noise problem around 'true' open source projects. As someone who wants to put up an open source project for collaboration (with willingness to maintain), it's hard to get the right eyes on it. Too often, it seems like no one cares. Conversely, as someone who wants to contribute to open source projects, it's a little hard to find small, early ones where I feel like I can make a meaningful contribution (yes, the big ones maintained by well-known companies are easy to find, but I'd much rather work on something put up by an individual programmer).

How do you cut through the noise on GitHub to find projects you actually want to work with (and that want you, too)?

If I replied to all the emails, pull requests, and issues people send me about my OSS code, it would be impossible for me to write a single line of code each day. Eventually, I believe the solution is to take what you get from the community and provide replies and feedback within a fixed amount of time, while sharpening your sensibility to focus on the most promising interactions, pull requests, or whatever. It will sound harsh to the many who expect a reply that never arrives, but it's better than stopping what you love, which is writing OSS.

I really like this idea. I don't actually have the problem of people bashing my door down to have me support old code, but my github account has accumulated a huge amount of cruft. Much of it, while quite useful for me to play around with, is completely useless to others. I tend to use GitHub as a backup system... I'd really like my GitHub account to be useful for others and not swimming with half-finished ideas. I think it's time to clean house...

Strangely, this sounds like it might work. I've been thinking about a lot of different projects, and end up doing absolutely nothing. There's also a lot of small stuff that, if I did an MVP, would actually be close enough to ship and be useful. Though I'm not sure about the projects I'm most interested in; they are fairly big, and getting to MVP could take many months (so doing even 3 would take a couple of years).

Just publish, and put a clear indicator right there in the README of the status/purpose/scope of the thing. I have a bunch of them marked 'experimental'; more rarely something gets all the way to 'minimally useful' or 'used in production'. Arbitrary example: https://github.com/jonnor/agree

It is not for me to decide whether something will be useful for someone else or not, that is up to them. Just give people enough facts that they can make an informed decision.

I find it a bit strange to write down 100 ideas of what I could develop. I think it would be much better to just develop some missing software or make a better version of something existing. Just collecting ideas without a real, deep interest in them sounds a lot like the beginning of abandoned projects.

I've written probably a hundred open source projects in my short professional career. But only two of them do I actually use every single day: a simple OS X window manager [1], and a hybrid command-line/native-GUI fuzzy string matcher [2] (you'd be surprised how many ways/places this can be used). The rest of them didn't serve well as practice like I'd hoped; they were mostly just a waste of time I could have spent better elsewhere.

So lately I have new criteria for how and when to write open source projects: (1) when I realize I have a need to automate something, and (2) when it won't take more than a weekend to get a basic functioning version up and running. Anything else, and I nope on out of there. I've got too little time and too many things to do already; I'm not about to waste it creating more unnecessary software. The world has enough of that already.

EDIT: adding links as requested

[1]: https://github.com/sdegutis/AppGrid (note: I started to rewrite this in Swift locally, because the Accessibility API sucks without generics, and the Objective-C version is a bit buggy)

Nicely done. Interesting how a website's design will make people take a second and consider what's being presented. I remember going to the old website and immediately dashing out, thinking this was something beyond me. The current design is tasteful and well executed.

I have nothing against it per se, but it seems that there is a trend in which "serious stuff" is graphically designed as if it were meant to be used only by children. There are many examples. One good example is carbonmade [1], which is a website to showcase professional portfolios (quite "serious stuff" if you ask me). I genuinely wonder where that trend is coming from. Is it because the current generation of engineers grew up watching cartoons? (Just guessing.) Or is it because it is a simple way to make difficult stuff appear more friendly?

I love the design. The little kid in the Where the Wild Things Are style gnu costume gives me a warm, Katamari Damacy type feeling. And that's what Guile needs, because it's so comfortable to use but there's no marketing suggesting that it would be better to use than, say, Perl (ugh) or Python (less ugh, but scoping rules=fail).

Though I do wish they had used the slogan "Guile goes with everything"...

I agree the site looks nice. One thing though, the cosplay guy in the illustrations on the page had me immediately thinking of Beastie which I found surprising seeing as GNU has a bit of a different ideology from BSD. GNU has long been using a gnu as their mascot so I guess the suit might be supposed to resemble a gnu calf.

Edit: Specifically I think it is the cover artwork of Kong's 2007 book Designing BSD Rootkits published by No Starch Press [1] combined with the FreeBSD logo [2] that caused me to think of Beastie when I visited the new Guile site.

I'd love for someone to explain what you get from using a secret management service other than encrypted at rest blobs.

Ex. You store your AWS Master key in a config file, and you have Microservice A that reads that key from the file. Microservice A is compromised (or its VM is compromised). How does having a secret store help you here? Couldn't the attacker just inspect the code of Microservice A and see that you are just reading from disk/reading from Vault?

In short, what do services like this protect me from (other than accidentally checking my code into a public repo)?

I have a genuine question, why not use S3 alone for secret management?

One selling point of Confidant is using IAM roles to bootstrap authentication to the secret store. You can also do that with S3: put each secret into an individual text file and give each IAM role permission to access only the secrets it needs. Set the S3 bucket to encrypt data at rest; it uses KMS behind the scenes and automatically rotates the encryption keys.
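The per-role, per-secret access idea could be sketched as an IAM policy generated per service. A minimal sketch (the bucket name, secret path layout, and KMS key ARN here are all made-up placeholders, not anything Confidant actually uses):

```python
import json

def secret_policy(bucket, secret_names, kms_key_arn):
    """Build an IAM policy document granting a role read access to
    only its own secret objects, plus decrypt on the SSE-KMS key.
    All names/ARNs are illustrative placeholders."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Read access restricted to this role's secrets only.
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}/secrets/{name}"
                    for name in secret_names
                ],
            },
            {
                # Needed so S3 can transparently decrypt SSE-KMS objects.
                "Effect": "Allow",
                "Action": ["kms:Decrypt"],
                "Resource": [kms_key_arn],
            },
        ],
    }

policy = secret_policy(
    "example-secrets-bucket",
    ["service-a/db_password"],
    "arn:aws:kms:us-east-1:123456789012:key/example",
)
print(json.dumps(policy, indent=2))
```

Each microservice's instance profile would get its own such policy, so a compromised service can only leak the secrets it was already allowed to read.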

Rotation of the secrets themselves could be scripted or manual, that part would be basically the same process as using Confidant or any other tool. And I believe S3 access can even be auditable with CloudWatch logs.

Also, S3 now offers either eventual consistency or read-after-write consistency. EDIT: actually, it looks like new object PUTs can be read-after-write consistent, but updates are not. So this could be a downside: if you rotate a key, getting the new one is only eventually consistent. In practice this might not be a big deal, though; there's already going to be a gap between when you activate the new key and when your app gets reconfigured to start using it.

I'm very curious what the downsides might be of doing this. For all the various secret management tools that have been released in the past year or two, I'm kind of surprised I've never heard anyone talk about using raw S3.

Please correct me if I am wrong, but I think there is no secure way to store secrets in a virtualized environment.

I wish I were wrong, because my heart always bleeds when I see DB passwords in configuration files! But as long as there is a hypervisor you do not control, you must trust the owner of the bare metal to (1) honor your privacy and (2) be competent enough to secure their system. Trust is nice, but it is not security.

Granted, Confidant and KMS seem like a better solution than most. I will look into it in more detail. Thanks for open-sourcing it and moving the solution forward.

Nice way for Lyft to fire back after that iOS reverse-engineering video [1] revealed that they were showing off one of their keys in a production client. I don't know if this was intentional, and I believe whatever exploit they had was mild, but it restores (at first glance) my faith in them a bit :).