Is it a matter of features, speed, or convenience? Obviously, all of those can be overcome, be it as a fork of the official emulator or as a third-party emulator. For instance, this new Chrome extension must be the same thing under the hood: a Dalvik runtime, possibly an ARM->Intel recompiler for any NDK applications, etc.

I figured the only reason this wasn't done to great effect already was that it wasn't in demand. But if it's so desirable, surely creating an actual emulator would be superior to hacking up web browser extensions and potentially playing cat-and-mouse with Google over this?

I hope Google gets us something official sooner rather than later. It's a little disheartening that I own a Chromebook Pixel and yet I can't use Google's own hardware to design or test Android apps without installing Eclipse on a sideloaded Linux chroot via Crouton.

I hope Google really carries this project as far as possible. The next several major tasks would be polishing up the platform, eliminating the bugs, and unifying the Android and Chromebook development interfaces. Think of the day when Android developers could actually design apps for the desktop. How cool would that be?

So, now we can write apps in Angular that run on the web and compile to Java so that we can install them to Android, running on ChromeOS, running on OSX.

Brilliant.

Edit: Perhaps the punny nature of this is deserving of downvotes, but the statement above is the actual use case I presented to a co-developer, discussing how this project could be of use to our app, which was built with Ionic.

FWIW, there's value in it (the app, not necessarily this post), even if it just means fewer unplug-and-swap cycles between test devices.

> coca-leaf which comes from South America and is processed in a unique US government authorized factory in New Jersey to remove its addictive stimulant cocaine

According to Wikipedia "The Stepan Company is the only manufacturing plant authorized by the Federal Government to import and process the coca plant, which it obtains mainly from Peru and, to a lesser extent, Bolivia. Besides producing the coca flavoring agent for Coca-Cola, the Stepan Company extracts cocaine from the coca leaves, which it sells to Mallinckrodt, a St. Louis, Missouri, pharmaceutical manufacturer that is the only company in the United States licensed to purify cocaine for medicinal use."

The article waxes so eloquently about this beloved product that I would have mistaken it for a paid PR piece. The article is a great read nonetheless.

Those who are also interested in the darker, grimier side of the same coin might want to check out the company's use of mercenaries for union busting in South America (by murder, of course; in the hands of the right spinners that would be 'terrorism'). Similar things happened in India as well.

I wish Coca-Cola would make an acid-free version of Coke. The phosphoric acid adds a slight tang to the drink, but in exchange absolutely destroys your teeth over years of consumption.

As a regular drinker, I'd happily pay a small premium for the "acid-free" version of the drink. The sugar still does damage, but the acid AND sugar together are a double whammy of "badness" (acid destroys your teeth's natural protective coating, and sugar feeds the bacteria that actually eat away at your teeth).

No amount of brushing can really undo the damage acidic soda does to your teeth; trust me, I know! Even with prescription fluoride toothpaste five times stronger than normal (5000 ppm vs. 1100 ppm), you're only slowing down the progression.

> The top of the can is then added. This is carefully engineered: it is made from aluminum, but it has to be thicker and stronger to withstand the pressure of the carbon dioxide gas, and so it uses an alloy with more magnesium than the rest of the can.

Nope, the pressure from the carbon dioxide pushes equally against all sides of the can. If anything the pressure at the top is slightly lower than at the bottom, at least if the can is standing, because of the weight of the coke pushing against the bottom.
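To put rough numbers on that (the column height and internal pressure below are my own ballpark assumptions, not figures from the article), the hydrostatic difference is tiny compared to the CO2 pressure:

```javascript
// Back-of-envelope: pressure difference between top and bottom of a standing can.
const rho = 1000;   // kg/m^3, density of the drink (roughly water)
const g = 9.81;     // m/s^2, gravitational acceleration
const h = 0.12;     // m, height of the liquid column (typical 12 oz can, assumed)

const deltaP = rho * g * h;        // hydrostatic difference, ~1177 Pa
const internal = 2.5 * 101325;     // Pa, assumed ~2.5 atm CO2 pressure when sealed

console.log(deltaP.toFixed(0));                    // "1177"
console.log((deltaP / internal * 100).toFixed(2)); // "0.46", under half a percent
```

So whatever drives the thicker top alloy, it isn't a meaningfully higher pressure up there.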

To keep you drinking, they add plenty of sodium (50 mg+) masked with sugar, HFCS, or sweeteners. They also add caffeine, a mild diuretic, to keep consumers drinking. And then they market it to children. Lovely people.

Check out Dr. Robert Lustig's videos. Also the book Salt Sugar Fat, about food-industry engineering.

> Modern tool chains are so long and complex that they bind us into one people and one planet.

When we think about colonizing the Moon or Mars with small groups of people with the intention of making the colonies self-sustaining over time, deep, long-evolved tool chains like the one described in the article could be very difficult to scale down and to replicate in other environments.

> The number of individuals who know how to make a can of Coke is zero.

This reminds me of a fact I recall from time to time. If civilization collapses after, say, a world war, I most probably can't make a pot, can't grow plants, can't tell whether a plant is edible, can't drill for oil, can't make plastic (or even glass), can't reinvent concrete, can't make gunpowder, etc. You get the point.

I can only write software and maybe drill with tools and nail with a hammer but that's all.

"The top of the can is then added. This is carefully engineered: it is made from aluminum, but it has to be thicker and stronger to withstand the pressure of the carbon dioxide gas, and so it uses an alloy with more magnesium than the rest of the can"

Yes, but the pressure is the same on all parts of the can. OK, almost the same, but still.

Maybe because of the parts that have been cut to make it easy to open?

Speaking as somebody who's never even smoked a cigarette or a joint: are there people who tried to recreate the "original" coke recipe? The one with "unprocessed" coca leaves? Is it available on say the latest instance of Silk Road? What is it like?

You could say this about any product. I think the essay would be considerably longer if it concerned a typical PC or phone, not to mention a car.

I also think the essay can be written with cynicism instead of wonder, e.g. with an anti-capitalist slant. With one innocuous affordable purchase you can deforest and pollute four continents whilst giving yourself diabetes and dental caries!!!

I don't think the attitude of being allowed to be hand-wavey is a good one for this sort of thing. Big O notation has very precise semantics, and it is very easily misused.

Many of the examples are wrong. A Python "print" statement is (for the built-in types at least) O(n) in the length of what it prints, so printing the first item from a list must be at least O(n) in that item's length.

Fundamentally, looking at big O notation as a way of measuring runtime is misguided. It is a way of describing a class of functions. But there is typically a level of indirection between the function being talked about and the one the analysis is applied to. For the "return the first item in the list" example, the function being analysed is a function from Int to Real: for each input size, it runs "return the first item in the list" on every list of that size and measures the time each run takes. The maximum time taken is the Real output.

That intermediate function is important, and needs to be specified properly. If it isn't, you get confusion. The section "N could be the actual input, or the size of the input" is an example of this confusion. Here, the intermediate function being analysed is much simpler (but it is still present).

I should probably also add, that I am not by any means an expert here, and I wouldn't be surprised if I've made sloppy mistakes here. I'll just make the pre-claim that this only further proves my point that being hand-wavey is not a good thing :-)

An O claim can always be pessimistic. A linear-time algorithm in N is not only O(N) but also O(N*N) and O(2^N): it is at least as good as all of these. If we are sure that the algorithm is neither asymptotically faster nor slower than N, we can say Theta(N), the tight bound.
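A toy illustration of that last point (my own example, not from the article): count the primitive steps a linear scan performs and note that every upper bound above linear is also technically true.

```javascript
// Linear scan that also reports how many primitive steps it took.
function sumWithCount(xs) {
  let total = 0, steps = 0;
  for (const x of xs) {
    total += x;
    steps += 1;        // one primitive step per element
  }
  return { total, steps };
}

const n = 1000;
const { steps } = sumWithCount(Array.from({ length: n }, (_, i) => i));

// steps grows exactly linearly with n, so the algorithm is Theta(n).
// It is *also* O(n), O(n*n), and O(2^n): all valid, increasingly loose, upper bounds.
console.log(steps === n);          // true
console.log(steps <= n * n);       // true, but a uselessly pessimistic bound
```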

This was doomed from the start. By trying to make it more realistic they exposed the limits of technology.

A key problem is that the weight of a blade is a huge part of swordfighting. A light bladeless controller can't capture the experience, especially since there's no feedback from hitting another blade.

On the other hand they might be able to recoup their investment.

In the leadup to Star Wars 7 there's a huge potential market for a lightsaber duel game. There's still enough time to develop a tie-in game, and I can see the tech being bought for over $500k.

To address the last line in the story - "As one would expect, many backers are questioning exactly what all of that half a million dollars was spent on - something Stephenson hasn't addressed publicly in the two years since the project was funded."

Not hard to imagine. $500k, less taxes, is pretty much enough to pay for two years of two full-time developers and a poorly paid 3D artist.

So now the whole game is gone? The work that has been done will be flushed down the drain?

Seems like it would make sense to release the whole shebang as open source, with potential commercial-license incentives, so that anyone truly interested in the community can work on it; another studio could even pick it up where it was left off while benefiting the original devs as well.

It seems like a risky move to announce that, oh btw, we gave $700 in refunds to some random people from the forum. Now the comment thread is full of backers saying "if THEY got a refund, then I want one as well!". That could get quite ugly.

I've hit my 30s, a period when it seems as if all of my friends suddenly have kids. That's a priority shift completely incompatible with my goals. Startups require that you give it all or go home, routinely requiring long nights, longer weekends, and blood and toil. If you aren't willing to put in the hours, eager replacements are standing behind you. If I fail, the women I work with will be out of their jobs.

It's this fearful attitude, lurking in the minds of bosses and employees, that is the problem facing women in the workplace who want to have children, more than anything else. (For example, I put it at the root of poor leave policies.) It's called sexism when it comes from a man, but here (from a female boss) it's clear it's just culture (American culture?).

I just had my first kid, and my wife had to go back to work at six weeks. I'm a software engineer, and she's a medical device rep in trauma. Unlike me, she can't work from home, she carries a pager, and she can't choose her work hours or reduce them. She wasn't itching to go back to work either; she loved being at home with the new baby. However, you do what you have to do. Some new moms do quit their jobs, especially if they weren't making much more than they'd save on childcare by staying at home, or if it was a crappy work environment or an unfulfilling role anyway. However, for many, it's not an option not to work, and being a software developer is actually a pretty cushy gig that I would wish on moms everywhere.

If you're afraid of having kids, whether for yourself or on someone else's behalf, go out and talk to some power moms.

I work as an engineer for an NYC startup and have 3 kids. No, it's not easy, but yes, you need to reset your priorities. Life becomes more focused on fewer activities. Once the kids get a bit bigger it's not as time-intensive.

I work roughly 6:30am - 8:30am and then 10am - 5pm M-F.

I have many other friends who are engineers at fast-moving companies with 2, 3, 4 or more kids. It's definitely doable.

If your company is asking you to work hours and hours maybe there is something wrong with their product development process or business plan.

And I really liked this article. As an entrepreneur who has structured my life around my family (i.e. work from home, flexible hours), I can empathize with Brianna's and Amanda's points of view. The entrepreneur in me is obsessed with development and deadlines and shipping. The father in me is obsessed with spending time with my daughter. There are times when both are at odds, and while I'd like to say I always make the right decision, I don't. It's a tough struggle. And it's a struggle I am very conscious of, because I have competitors who don't have, or don't want to deal with, similar constraints.

But honestly, I often think these constraints make me a better entrepreneur than I used to be, because I am forced to be strict about my priorities and time. If something is a waste of time, I don't give it a second glance and move on to something else (HN notwithstanding, ahem).

I'm glad I read this despite the link title, which is appropriately based on the article's sub-title (The title, "Choose Your Character", is even less descriptive). The article hits on some of the startup and indie gamedev work-life balance issues that affect everyone and some unique to women.

A bit OT, but I think it's refreshing to have a character like her in the tech scene, vocal and taking the spotlight in a lot of places.

At first I was thrown off by the very douchy looking attitude, it felt too much like overcompensating. And I'd hate to work in her company for so many reasons, the burning startup mindset being the main one.

But this article, like her Debug interview or the Isometric podcast, also shows other facets that are pretty fair, balanced and well thought out. The podcast in particular alternates between hilarious and soul-crushing moments; I'd recommend it to anyone wanting to hear something a bit different.

It's silly how much goes into political correctness in games nowadays. You'd have to make an Asteroids clone just to avoid offending anybody (except sentient asteroids...).

Just make your game fun, challenging or whatever your goal is and have fun making it. And of course you can put in interesting looking characters, it's called art :)

Also, on the politics topic: oh, I hate that so much. It only takes one person to mess up a whole team, and the worst thing is when it's one of your superiors. It's horrible when you can't do anything but change jobs (been there, done that).

edit: to the downvoters, please read the whole thing and my response down there, if you still disagree, no hard feelings :)

edit 2: From the article, one of the points I was referring to

"Why are they all white? sneered a liberal friend of mine before launching into a 20-minute screed about how offended he was by the naked shower scene in Heavy Rain."

It's completely hilarious that this article would bring the anti-PC haters out of their caves. There is absolutely nothing PC about this article; it breaks the script in numerous ways:

- referring to her employees as "girls" instead of women

- her conflicts about her employee's pregnancy

- fretting over the attention to female-image issues in games, wondering if "the only way to win this game is not to have women at all"

I guess as long as a tech writer dares to use the female first person, HN will be deluged with comments from "gahh I HATE politics" know-nothings plus their more anti-social brethren. It's curious there would be such focus on the boss being childless; that is so not the point of the article. I would probably criticize her cheesy I'm-so-rad-on-my-red-motorbike aesthetic before even thinking about gender stuff.

If there's a bright side to all the defensiveness, it suggests that the recent focus on gender is working. Much like the Anita Hill hearings brought all sorts of ugliness out on the way to sensible anti-harassment policies, we're witnessing the next evolution.

Notice something not on the list of best practices: documentation. While Node itself has excellent docs (for the most part), the Node community is terrible about documentation. If you're lucky you'll get a single README.md file with the basics covered.

Pick any category of module and there's a good chance the most popular modules have little documentation; certainly nothing close to comprehensive.

Coding style: in most cases, if you write your code properly, you don't need to nest more than 3-4 levels. If it gets deeper, split it out into separate functions. Otherwise, it's a perfect job for async.series.
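For readers who haven't used it: async.series comes from the third-party `async` package on npm. A minimal hand-rolled sketch of the same idea (my own simplified version, not the library's implementation):

```javascript
// Run an array of callback-taking tasks one after another, collecting results.
// This is the flattening idea behind async.series: steps read top to bottom
// instead of nesting one callback inside the next.
function series(tasks, done) {
  const results = [];
  let i = 0;
  function next(err, result) {
    if (err) return done(err);           // abort the chain on the first error
    if (i > 0) results.push(result);     // record the previous task's result
    if (i === tasks.length) return done(null, results);
    tasks[i++](next);                    // start the next task
  }
  next(null);
}

// Three steps, no nesting:
series([
  cb => cb(null, 'connected'),
  cb => cb(null, 'queried'),
  cb => cb(null, 'rendered'),
], (err, results) => {
  console.log(results.join(' -> '));     // "connected -> queried -> rendered"
});
```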

I'm coding a fairly large application in Node and had never heard of EventEmitter or Streams. Does that mean I'm doing something wrong? The impression I get from the article is that these are such fundamental patterns that every serious application should use them.

When they say 'avoiding closures', how does that relate to the functions in your module? Your module is often exported as a function, so are they suggesting that every function be exported? I suspect I'm not understanding the logic behind 'stack based'. Why is it better for your functions not to contain other functions (or is it not to be contained)?
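My reading of "avoiding closures" / "stack based" (this interpretation is my own guess, not necessarily the article's): prefer named functions that receive all their state as arguments over nested anonymous functions that capture surrounding variables. Something like:

```javascript
// Closure style: the inner anonymous function captures `greeting`
// from the enclosing scope, keeping that scope alive.
function greetClosure(name, cb) {
  const greeting = 'Hello, ' + name;
  setImmediate(function () {
    cb(greeting + '!');                  // closes over `greeting`
  });
}

// Flat style: a named top-level function gets its state as arguments,
// so nothing is captured. setImmediate forwards extra args to the callback.
function finishGreeting(greeting, cb) {
  cb(greeting + '!');
}
function greetFlat(name, cb) {
  setImmediate(finishGreeting, 'Hello, ' + name, cb);
}

greetFlat('world', msg => console.log(msg));   // prints "Hello, world!"
```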

"We imagine they siphoned off any remaining cash to investors, so they can declare bankruptcy and not pay the court-ordered legal fees." Is that a crime? If it isn't, it seems like it should be. The creditor already has a legal claim to the funds at that point. That's at least economically equivalent to stealing their money.

I think it's interesting that performance is mentioned as a primary feature of this. I haven't deployed a web service in nearly 15 years that had the HTTP server as the limiting factor for the performance of a website. Serving small files from RAM really fast just isn't that interesting. For this kind of workload Apache is fast enough for the vast majority of deployments (such a large majority that I've never worked on a deployment that it wasn't, and I've worked on some very large deployments), nginx is fast enough, and H2O is fast enough. Going from "fast enough" to "2x fast enough" isn't going to matter to end users who are waiting on other parts of the system.

The database is still gonna be a bottleneck. The application is still gonna be a bottleneck. The disks are still gonna be a bottleneck. The network is still gonna be a bottleneck (Apache can saturate a Gbit NIC, so can nginx, so can H2O).

I'm not saying this is useless. It looks like a cool project, built by someone really clever (unco is hilariously clever), with lots of good uses (an easily embedded HTTP server library is nothing to sneeze at). I'm saying I think it's weird and unfortunate that so many people focus on performance of the web server, as though it will make a difference for end users. In the vast majority of web server deployments any of the major web servers will do the job and will perform well enough to not be the bottleneck in the system.

Looks impressive, though benchmarking with "ab" should be taken with a grain of salt.

nginx does a lot of optimization at many levels that ab can't measure.

In short, it only profiles the speed of the HTTP parser and certainly not the network stack.

There are a lot of things an HTTP server does to keep connections "steady": disabling the Nagle algorithm at the right moment, gracefully handling failure, managing slow clients the right way, ...

Very nice. I've often wondered at the lack of embeddable HTTP servers in the C/C++ world. Are there any other libraries that do the same thing? What is the status of this project? Is it being used in production anywhere?

Does this server/library allow the same kind of hooks and configuration that a traditional web server allows?

Or is that the point? If you don't need all the configuration and features of an off-the-shelf web server, you can more easily custom-build an H2O HTTP server for your specific needs that is blazing fast?

As happy as I am to see open source libraries that can support SPDY/HTTP2, these "2x faster than nginx" stats are a joke. The dubious claim is based on how fast H2O and nginx can serve a 6-byte and a 4 KB response.

I'm a little confused: why is ASM still an issue these days? Sure, I can understand some inline ASM for hardcore speed-critical code, but beyond that, why bother? Even interpreted languages seem fast enough these days, so compiled should definitely be fast enough, and resorting to ASM should, imo, be unnecessary.

NB the above is a personal view & I'm not a programmer by profession...so if I missed something - no offence intended.

The question is what standard of living we want for our species in the long term.

We can always live more comfortably today by consuming the non-renewable resources that would otherwise keep our world sustainably enjoyable, but at the loss of the benefit those resources would later give. Slash-and-burn farming does this. So do putting up a mall over untouched land, burning fossil fuels, and overpopulation, all of which are the opposite of setting aside part of the planet.

Business people know the concept better than anyone. They know a company is in trouble if it sells an asset whose operation produces profit to pay for current operations.

We can set aside as much of the planet as we like and live in as much abundance per person as the planet can sustain indefinitely, though not as much abundance per person as we can enjoy today by consuming non-renewable resources. Using up those resources today only impoverishes future generations.

> the sixth mass extinction event, the only one caused not by some cataclysm but by a single species: us.

Not so sure. IIRC there are reasons to believe that the big one, the Permian-Triassic extinction, was due to Methanosarcina, an archaean genus. OK, that's a genus rather than a species, but still.

It's a bit naive to think that all extinction events happen because of some geologic or celestial event. Sometimes evolution goes terribly wrong and sh.t hits the fan, whether by releasing nefarious gases into the atmosphere or by creating a primate intelligent enough to rule and consume most of the biosphere.

I can't imagine us doing something like this for animals when we can't even do it for Ukrainians. From a political perspective, aggressive nations will always be seeking out annexations/territorial control and limiting the amount of land for human use would only encourage this. I mean, we're already discussing oil territorial disputes in multiple locales as well as upcoming "water wars" as unavoidable.

I don't think humanity is up to the task. This proposal sounds like something out of a sci-fi novel where everyone is a Marty or Mary Sue or some benevolent engineer-dictator is running the show. In real life, guys like Putin don't give two shits about life and will march troops on a whim to obtain resources.

Consider a single plastic toy, bought from Amazon, used for a few months and thrown away: the material and minerals are taken from the ground, factories are built to produce it, it's shipped to the harbor, overseas, to stores, to the consumer, and then: disposal. For what? Animals are living creatures that have inspired (and still inspire: so many movies, stories, sports teams, logos, metaphors) humanity for ages. Many, many daily things we can really live without. Think ShoeDazzle. Do we really need new shoes monthly, or to "get obsessed"[1] about shoes? Can we at least buy something of better quality that lasts for years? This shopping and comfort carries a cruel, irreversible price tag for animals and wildlife. Add to this the wars and conflicts all around the world, and the results are devastating.

One of the discussions that sprang up here in my cubicle was: How? How can we set aside half of the planet for anything other than ourselves? It seems impossible! With almost every nation, state or person out there worrying about their piece of land, it surely must be impossible.

But not quite.

Use Nuclear Leakage & Irradiation. Like the one that led to the Red Forest in Chernobyl [1].

'Radiological Reserves' are probably the only way to set aside a large area for animals/plants with a guarantee that humans will not come by. Not in the next 10,000 years!

I think the only ultimately sustainable solution is not to "set aside" any percentage of the planet for wildlife, but to develop ways of living that are not based on a differentiation between spaces of civilization and spaces of wilderness. Our species is naturally a node in a complex set of ecological systems, and instead of trying to detach ourselves from that system we should find a way to achieve our goals while living within it.

The world is approaching population stabilization. Japan's population is going to go down. So are China's, Germany's, Spain's...

As we reduce illness in Africa and increase automation, people will need fewer children.

Population will hit a peak and then not grow anymore.

If we solve fusion energy, we will be able to grow vegetables or plankton underground, on stacked floors, much more efficiently: we'll be able to hold a stable temperature all day, control pests physically (by controlling access) rather than with chemical products, and grow food very near where it is consumed.

See also: http://www.americanprairie.org/ -- an effort to link public and private lands to create a 3 million acre preserve of the Great Plains ecosystem. That's pretty big, but not even close to the scale this article considers.

It is strange that modern agriculture employs crop rotation to increase yields, but we do not do the same thing for harvesting food from the ocean. CBC's The Nature of Things recently had a series about the state of the oceans. One of the episodes showed the success of marine reserves in New Zealand. I'm having trouble finding a good link, but the turnaround was amazing.

There are fundamental cultural issues that will need to be addressed for civilization to reach sustainable, large populations. Yet even if we do undergo a mass extinction, it may be slow enough that we can actively intervene in the ecology to prevent the collapse of civilization. With the rise of synthetic biology, advanced genetic engineering, realistic ecological simulations, and perhaps AI-engineered organisms, it may be exciting. We could be on the cusp of an unprecedented explosion in new genes, phenotypes, biochemistry, and general biodiversity.

According to NOAA (http://www.noaa.gov/ocean.html), 71% of the earth is ocean, leaving only 29% land. So the goal is for 14.5% of the earth to be set aside for wildlife? That is a terrible title for an article. 14.5% != 50%

Our society is very irresponsible. Every holiday is a nightmare for the planet. The tons of junk, wrapping, and throwaway stuff we consume will be ridiculed by future generations, not to mention the time and energy (literally, too) wasted on shopping. I stopped buying birthday decorations and try to educate my kids to stop having these merchant-inspired "festivities". All the junk from Easter, Thanksgiving, Christmas, and the endless kids' birthdays piles up to a ton per year. Be responsible: we're leaving a huge liability to future generations, our children and grandchildren, whom we care about most! I'm really disappointed in the waste-management recycling centers, which recently started refusing to take anything but CRV. I invest time and pile up tons of non-CRV recyclables, and they no longer take them.

Not sure why I bother trying to talk to you people anymore, but I will go ahead and throw out an idea that I assume you will simply reject because it goes against your belief system.

We should not worship nature to such a high degree. Yes, we should try to conserve wild areas as a buffer against mistakes and for basic enjoyment. And we are not doing a good enough job of that.

But the assumption is that basically the wild areas have some sacred process or system going on that we cannot possibly ever aspire to understanding or surpassing.

First of all, there is absolutely no separation between the "wild" world and the "human" world. The idea of a natural world that is separate from a human world is an oversimplification that has become misleading.

Everything in the world, including people and the things that we make, from human feces, to plastic trash bags to rocket ships and computers, is the result of the same natural physical laws and processes involved in the universe.

The planet sees itself with billions of eyes. The planet thinks with billions of tiny minds.

The cities, roadways, and agricultural fields that cover increasingly large areas of earth are part of the natural evolution of the planet.

It's hard to really convey, especially since we are so far down the line of nature worship, but part of what I am trying to get across is that humans have already surpassed nature in some ways, and where we haven't yet, we can create environments that do.

I think it will be easier to appreciate this type of thing once we become a multi-planet species. Or at least get a colony on the moon or something.

Because part of the nature worship is the reality that we only have one biosphere to support us. We need to fix that.

But another thing -- this does tie into Malthusian population control, eugenics, classism, etc. There is an inherent disgust for the dirty masses that is hidden behind the earth worship. We have to remember the value of human life.

This will only become possible when a couple of important milestones are reached, and they are biggies. First, we have to get rid of a lot of the roads, and that can't happen until personal flight becomes commonplace. Second, the human population has to stop growing; in fact it needs to shrink even now. We are already reaching the upper limits of agricultural and water availability. Unfortunately, while the former is difficult enough, the latter is near impossible due to the dominance of infantile religions.

During ILC 2014 in Montréal, someone presented the emotional Lisp joke: a Lisp where ( was replaced with (-: and ) with :-). In a second iteration of the joke, these were replaced with emoji.

These new terms of use seem more realistic in one area. The prior terms stated:

"Project Creators are required to fulfill all rewards of their successful fundraising campaigns or refund any Backer whose reward they do not or cannot fulfill."

The new terms state:

"If the problems are severe enough that the creator can't fulfill their project, creators need to find a resolution. Steps should include offering refunds, detailing exactly how funds were used, and other actions to satisfy backers."

In my book, this is more in keeping with the reality of a project that (honestly) failed. If the money's gone and things just didn't work out for whatever reason, it's just not realistic to expect all funds to be returned. If the person or persons doing the project had this kind of financial reserve, why would they even use Kickstarter, other than as a marketing tool?

I'm not sure how much difference the new terms will make in practice but they do soften a black and white rule that, frankly, shaded Kickstarters more toward pre-orders than speculative project backing.

Edit: As noted elsewhere, the new terms also explicitly further remove Kickstarter from imposing any specific requirements if projects fail--and therefore any implications of a role in enforcing said requirements.

Instead of just trying to remove themselves from the process for legal reasons, surely they're at the size where they could entertain providing project managers to help coach entrepreneurs through their venture? It's not like YC steps back and abandons each batch to see who survives; they coach them through to maximise success.

If a Kickstarted project is tackling manufacturing of a physical product for the first time, they risk making the same mistakes as others before them. YC, in a case like that, would put entrepreneurs in touch with advisers who've gone down that road before and learnt from the experience.

Or is this something they already do? Kickstarter wasn't opened up to Australian projects until recently, so I'm not sure.

I created the Gmail application running on J2ME, which we shipped in 2006 [1].

When we shipped, this application ran on about 300 different devices which, for those of you who ever wrote J2ME code, is pretty impressive. J2ME development was an absolute nightmare. There was no debugging on device, pretty much no tooling, even doing a println to find out what was going on in your code was not supported (I had to write entire libraries that would display debug messages in the window titles).

The default J2ME widgets were absolutely horrendous (and they all looked different on various phones) so we had to write our own widget library. This came at a price but it was invaluable in being able to make the app work on hundreds of devices.

Looking back, I honestly don't know how we did it, but the team was absolutely brilliant.

Shortly thereafter, I joined Android where I was asked to create the Gmail application and help build up the operating system. With my experience on J2ME, I knew exactly what I wanted Android to have:

- Seamless on device debugging

- IDE support

- Powerful view system

- Java APIs that look familiar to Java developers

In short, everything we never had on J2ME.

Needless to say, my subsequent work on Android was infinitely more pleasant than the year I spent on J2ME, which I don't miss one bit :-)

I worked on some J2ME applications around 2004. While we built usable apps, if you wanted any kind of user interface that didn't look like crap, you had to implement it yourself. The default text fields and labels did not look nice.

Another problem was that each carrier pretty much determined what you could do, such as using GPS or network access. So on some phones it worked, on others it didn't. Of course, that's assuming you could get it onto a customer's phone in the first place. There were app stores, but they sucked. We used Nextel's "store", and it was not fun.

I hope the situation is better today for J2ME; when the iPhone and Android came out, they showed the carriers that people actually do want to use their phones' full capabilities, and to have an easy way to get apps onto the phone.

J2ME programming is actually kinda fun, because the concepts make sense, and there aren't many of them. Canvas/Screen is pretty straightforward. Android's Context/Activity/Fragment/Intent/Bundle/etc/etc are a cognitive nightmare by comparison.

Unfortunately JRE 1.3 is almost 15 years old, and it's just annoying to e.g. not even be able to use java.util.List. And MIDP just isn't very rich in support for ... anything.

I managed the tech/product aspect of Opera Mini and its direction for most of its first decade (we started in late 2004). There are 250+ million monthly active users of it now (most in places like SEA, Russia, India, Africa, South America). Reading stuff like this is quite rewarding.

This post somehow made me think of the first English language blog post from early 2006 that really appreciated what we (me and my team) accomplished:

Have you seen the cheap Android phones out there? The Android One phones are about 100 bucks. The Moto G is $180. Still not $50, I know, but it's getting close enough to really start squeezing those phones (and J2ME) right out of their niche. And Android development is Java (for all intents and purposes) too.

Ex-J2ME dev here: I could not go back. It was great developing on a limited platform (boundaries spur creativity), but the bugs were show-stoppingly terrible, as were the completely unnecessary platform differences. I would never go back.

Another huge challenge with J2ME application development is the marketplace options. Most of these are carrier controlled, and you must pass certification tests per carrier, per device in order to make your application available. While some devices/carriers will allow easy J2ME 'side-loading' (it was never called that in the J2ME world), almost no one does.

Additionally, most developers are aware that mobile app purchases most often happen immediately after the purchase of a device, and lots of users never buy another app again. This is even more stark a scenario in the J2ME world (particularly in the US - though we make up a small amount of the J2ME devices in use).

What was most frustrating to me about J2ME was how few device manufacturers/service providers were willing to open up to third-party app developers. In the U.S., I found these phones to be about as open as video game consoles: You have to get the carrier's blessing before you can even start development on your app.

In the military I learned just how much the cohesion of a group contributes to its success. Two major factors of unit cohesion that I observed were pride in the mission, and emotional bonds within the group. The two combined are almost unstoppable. This is a little uncanny, as I stopped in the middle of booking a flight to my 10-year deployment reunion just to share this thought with you.

Also, I was in Baghdad during Ashura. It was a vivid sight to behold. The whole neighborhood was alive with a kind of electricity.

I've played sports my entire life (father was a basketball coach), and in that world these sorts of "rites of passage" are commonplace. I've also seen a bit of this at software companies -- for instance at Facebook I got the feeling that a lot of engineers felt simpatico as a result of the shared "suffering" that went into the tedious interview process.

My experiences led me to believe that, while this can be effective, it only works on certain classes of people. That is, I think a person needs a certain amount of naiveté to be drawn in by schemes like this. They need to already be willing to buy into some litany of conquest and glory. To be young and/or dumb.

Perhaps it is an innate human tendency to form stronger bonds under these conditions; but it's certainly not expressed equally throughout all phases of life, circumstances, and the population at large. I'd be very wary of attempting to exploit it by constructing rituals for a software team.

If someone is willing to pass an extreme rite of passage (like trial by fire) to be accepted, isn't (s)he already primed to form a stronger social bond, compared to someone who would only be willing to pass through a simple, non-straining ritual (like hand-shaking)? I suspect that most of the "hand-shaking" crowd would simply shun an extreme ritual, instead of being transformed by it.

I don't know about "extreme rituals", but I did recently experience the power of bonds forged in the (metaphorical) trenches. Among those in the trenches, an "Us vs Them" mentality takes hold: "Us" being the ones in the trenches and "Them" being everyone else. So when a peer is in trouble, people will do whatever it takes to help (since he/she is one of "us"), but when a general shows up, nobody is willing to go beyond the bare minimum (since the general was not in the trenches and is not "one of us"). Might not be ideal in terms of the overall picture, but in terms of group dynamics I've never seen anything even vaguely as powerful as that "Us vs Them" loyalty.

I wonder what sort of implications this has for interview techniques. Could the grueling day (or multi-day!) long interviews that tech companies are famous for actually be a way to make those who pass value their job more?

I wonder if triggering snapshots on Linux inotify() event detection (or similar) would be easier. At the very least, skipping snapshots that are exactly the same as previous versions would be a nice feature. I implemented an inotify()-based Angband cheat as a Perl script once... no ZFS required! Does the average home directory change more than once per 5 minutes on average? I doubt it.
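The "skip identical snapshots" idea is easy to sketch even without inotify: fingerprint the tree and compare against the last snapshot's fingerprint. A minimal sketch (it rehashes the whole tree on every pass, so it only makes sense for modest home directories; an inotify watcher would replace the polling, and the function names here are made up):

```python
import hashlib
from pathlib import Path

def tree_fingerprint(root):
    """Hash every file's relative path and contents under root into one digest."""
    digest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest.update(str(path.relative_to(root)).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def snapshot_if_changed(root, last_fingerprint):
    """Return (fingerprint, took_snapshot); skip the snapshot when nothing changed."""
    current = tree_fingerprint(root)
    if current == last_fingerprint:
        return current, False  # identical to the previous snapshot: skip it
    # ...trigger the real snapshot here (zfs snapshot, rsync --link-dest, etc.)
    return current, True
```

Run it from cron every five minutes and identical snapshots simply never happen.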

I'd really like to see the full ISA, etc. for the chip. A few years ago, I was doing research on building a bytecode-based VM after working through Peter Michaux's "Scheme From Scratch" http://peter.michaux.ca/articles/scheme-from-scratch-introdu.... (I highly recommend running through his code, but do the GC earlier; it's easier to get it right from the start than to try to add it later.) I couldn't find anything online listing the kinds of instructions you'd want in a Lisp chip.

Talking about Lisp machines: I don't know much about these things, but I was thinking about them recently when Hewlett-Packard announced its so-called Machine^1. They want to build a new kind of OS for it, but wouldn't a Lisp machine just do?

"But the assumption that digits in a big power of two occur at random"

That may not be the best of assumptions, but it doesn't hurt his argument. Benford's law tells us that the initial digits of 2^n tend to be low (empirically, but in this case, also mathematically; see http://www.johnderbyshire.com/Opinions/Diaries/Puzzles/2004-...). Fewer than 8% of powers of two begin with the digit '5'. Also, fewer powers of five begin with an even digit than with an odd digit.
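The "fewer than 8%" figure is easy to verify empirically; a quick sketch comparing the observed leading-digit frequency of powers of two against Benford's prediction log10(1 + 1/d):

```python
from collections import Counter
from math import log10

def leading_digit_fractions(n_max):
    """Fraction of 2^1 .. 2^n_max whose decimal expansion starts with each digit."""
    counts = Counter()
    value = 1
    for _ in range(n_max):
        value *= 2
        counts[str(value)[0]] += 1
    return {d: counts[d] / n_max for d in counts}

fractions = leading_digit_fractions(10000)
# Benford's law predicts P(first digit = d) = log10(1 + 1/d); for d=5 that's ~7.9%
print(fractions["5"], log10(1 + 1 / 5))
```

Over the first 10,000 powers of two the observed frequency of a leading '5' sits within a fraction of a percent of the Benford value, comfortably under 8%.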

The argument that there is no deep mathematical reason why it has to be true, though, I find more belief than math. We don't know such a reason, but that doesn't mean it doesn't exist.

Much of the colour scheme and the grey/blue fading ramp behaviour appears to have been lifted from a similar tool I wrote, vtmc[1]. While it was nice to be credited in the "CREDITS" file, it would have been nicer to make it into the README -- nicer still for the code not to have been taken from MIT to GPL!

But I suppose, as they say: imitation is the sincerest form of flattery.

At the recent PyOhio conference I saw a neat trick where PostgreSQL event triggers advanced the web-hosted slides. I think it was this one: http://pyvideo.org/video/2842/pushy-postgres-and-python ... anyway, the presenter was able to simply stay in his shell, participants could watch the slides on their laptops, and he'd occasionally refer to them on the large screen as well.

It's an admirable attempt at telling a story and I appreciate the journalism. Perhaps I'm ADD, but I just don't have the attention span to get engrossed in this kind of format and follow every word through to the end.

Man, there are a lot of diagnoses getting thrown around this thread. As a caregiver to someone with a serious illness, as well as someone who periodically suffers from many of the same mental and emotional issues raised here... How about refraining from doing that unless you are A) a mental health or otherwise trained medical professional; and B) someone who has actually seen and assessed the patient. I'm not calling out anyone in particular because let's face it, this is HN and we're probably all know-it-alls at one time or another, but this can have a particularly pronounced effect on the posters who are receiving the comments.

If you are dealing with any of these issues, my heart goes out to you. Please reach out for help, at the very least to a counselor or therapist who specializes in the things you're dealing with. If you need help finding one, my email is in my profile; I'm glad to help.

I'm also recovering from a depression which lasted quite a while. It absolutely sucks because you think you're worthless, nobody loves you, you can't get anything right, and that the best thing would be if you just didn't exist anymore.

And on top of that you isolate yourself. I know how hard it was to ask for help, so I want to share some things which helped me:

- Realize that your depression is lying to you. It doesn't tell the truth. It makes you believe that something is logical even if it isn't.

- Read 'Feeling Good' - terrible title, great book. It will probably work better than average on the average HN reader because it takes a 'rational' approach to depression (cognitive-behavioral therapy). It helps you to recognize destructive thought patterns and how to deal with them.

- Garbage in, garbage out. What works for computers also works for your body. Yeah, you're a geek, but you can eat some vegs instead of the 500th pizza. Also, working out (or other sports) is pretty great.

- Long term: Therapy which tries to work on the root cause and not just at symptoms.

Finally, here's a rather extensive list of lectures, books, exercises, etc. which help with dealing with depression [1]. Back when I was fed up with feeling like crap, I created a spreadsheet with the 8 activities and tracked them every day.

Note: Every person seems to react differently. I read about people who improved a lot by meditating; on the other hand, it didn't work for me.

So, try some things out and don't give up. You can beat that liar in your head.

I guess I'll be the only person to comment on the actual Moz business struggles rather than the depression side of this post. Moz raised their money at a really tricky time because it was right before Google essentially bent over the SEO industry. When Rand mentions the Content tool that hasn't even started being developed, that was something that was supposed to take your Google Analytics keyword referrer data and match it to your content and your rankings and your links and your competitors and basically help you spot keywords and content you can easily rank better for.

The timeline seems to be matching up where they had this plan for this tool before any of the Google SSL stuff started, so as they started working on the design and UX of it, Google started rolling out the SSL stuff and it basically ruined their idea. Moz ended up adding tools to try and guess what keywords made up your "(not provided)" data but that's a far cry from what they were originally planning.

I'm basing this entirely on being heavily involved in the SEO industry around the times mentioned in Rand's article and having even run a successful SEO SaaS product (which is still going even though I've moved on to other projects). I just remember seeing screenshots of what they wanted to build and thinking "wow, if they can nail this, it will be great". I wanted to build a similar app. But when Google started hiding all organic keyword data in analytics, I distinctly remember saying "Well there goes Moz's whole new product".

Google really fucked the SEO world up with their (not provided) move. Think what you will about SEO but it's still a legitimate marketing channel and I really have never been able to understand why Google thinks it's ok to not share your organic keyword data but your paid keyword data is totally fine to share with site owners.

>> ...layoffs is a Pandora's Box-type word at a startup. Don't use it unless you're really being transparent (and not just fearful and overly panicked as I was).

I made a similar mistake once as a manager and experienced this kind of thing more than once as an employee. Certain words like "layoffs" or "merger" are so loaded because employees know that you know more than they do. Even if you think you're being totally transparent, employees are correct to assume that you're holding some things back because you are. It's your job to understand the state and direction of the company and give your employees the information they need to do their jobs. Employees, especially the smart ones, are going to try to infer additional information from what you tell them even when you think you've told them everything they need to know. Leaders need to be aware that a certain amount of "Kremlinology" happens in every company.

He made things worse by being vague about the company's real situation and contradicting himself a couple of sentences later when he said, "...we'll survive (though not with much headroom)..." If he's talking about layoffs, who is this "we"? Everybody? Rand and Sarah? If you're going to be transparent, you also need to be specific and direct. A better approach might have been, "Sarah and I modeled out some worst-case scenarios last week and this stretches our break-even point an extra six months, which will constrain our growth."

Speaking purely to the experiences of building a new software product, I've seen this exact story play out countless times. Everyone (except maybe the engineers themselves) seems to think that designing a software product is part of the "planning phase", and thus should happen before any time is "wasted" on development:

> "That product planning led to an immense series of wireframes and comps (visual designs of what the product would look like and how it would function) that numbered into the hundreds of screens..."

The biggest contributor to this I've seen is the dozens (hundreds? thousands?) of small ways that a design (done in a vacuum, without simultaneous prototyping) will differ from established development patterns, frameworks, and other pre-packaged solutions that engineers use daily to avoid reinventing every wheel. And engineers respond with timelines that expect to be able to leverage those frameworks. Thus the dissonance begins.

One example: a design calls for a form to be broken across 4 pages. There may be great aesthetic rationale or even user testing to support this, but that means that in all likelihood any framework (e.g. Rails/Flask/Play/etc., not to mention native apps) will have to be modified to support sessions, changes to validation, changes to the auth domain, persistence changes, etc. And it's not necessary for an MVP. And many times these differences are much more subtle and deeply entrenched, and would require rethinking much of the wireframes/designs to align with development patterns. /rant

I'm not sure what the answer is here, except maybe that this is one more point in favor of having a "technical founder" or in general a technical person with decision-making authority, to avoid going down a road without proofing out your ideas or timelines.

One last comment - this post from Rand reminds me of the following from Ben Horowitz:

"By far the most difficult skill for me to learn as CEO was the ability to manage my own psychology. Organizational design, process design, metrics, hiring and firing were all relatively straightforward skills to master compared to keeping my mind in check. Over the years, I've spoken to hundreds of CEOs, all with the same experience. Nonetheless, very few people talk about it and I have never read anything on the topic. It's like the fight club of management: The first rule of the CEO psychological meltdown is don't talk about the psychological meltdown."

> "the funny thing is, Marijuana doesn't have any pain-killing properties. It just lessens tension, anxiety, and stress for some people."

Marijuana is an analgesic. But in this case the effects stem from the fact that it's an anti-inflammatory, so the fluid in your disc is no longer compressing the spinal nerves. And the fact that it reduces anxiety also reduces inflammation even further, since anxiety is probably largely what was causing the inflammation.

This is an incredibly brave, and hopefully cathartic post by someone I greatly admire. I really hope he is able to find the support and peace he needs.

As a bit of an aside, I wonder how much of this has led to similar troubles for other founders:

When the Foundry investment closed, we redoubled our efforts to build Moz Analytics. We hired more aggressively (and briefly had a $12,000 referral bonus for engineers that ended up bringing in mostly wrong kinds of candidates along with creating some internal culture issues), and spent months planning the fine details of the product.

I've heard from friends & colleagues about the massive amount of pressure they've felt after closing an investment round. While fundraising is already an incredibly trying process, the next stage is sometimes even more difficult.

In contrast, other friends & colleagues who've opted for the bootstrapped route (either by choice or circumstance) haven't seemed to face a similar massive amount of pressure. Yes, they faced incredible stress too, but not to the level of those that have raised capital.

This is merely an anecdotal observation made in my peer group. I don't mean to imply that this is some kind of phenomenon. And clinical depression is something that can cut through any kind of circumstance.

I just can't help but notice the stark difference in stress level of founders who are growing organically & carefully vs founders who are in a mad recruiting rush and sometimes hire the wrong kind of people. I wonder how much of a relationship there is between having the right kind of people in your company vs the wrong kind of people, and the stress level of a founder. I would imagine a lot.

I love it when CEOs own up like this; it's probably one of the most appealing traits in a leader I personally can think of. As long as they don't become too insecure to actually lead, introspection and self-criticism are strengths, not weaknesses. Besides, being aware of these traits and their negative repercussions puts you in a pretty good place; the ones who really suffer are the guys who repress and deny the down slopes, always happy and bubbly on the outside but in reality inches from a mental breakdown.

The last part about how stress causes physical health problems is very important, and very overlooked. Besides the muscle and nervous tension the OP mentioned, stress seriously reduces immunity which can manifest itself in a myriad of unexpected ways (whichever subsystem fails first), from infections to cysts and all kinds of nastiness.

I respect Rand and give him a lot of credit for vocalizing his challenges. Depression is a challenge and it can be overcome.

I am not a doctor, but I can tell you that a lot of my peers are suffering from depression from business, marriage or just in general.

One thing I do know is that the world has changed a lot in the past decade. The price of everything just keeps going up and we are constantly bombarded by information. Humans are not built for that. There is no badge of honor for being under stress 24/7. It will catch up to you one way or the other.

Humans suffer from the fight or flight responses that we encounter during high stress situations. The challenge is to digest it and make decisions not based on fight or flight emotions.

The body produces cortisol when we are under duress and it is horrible for you. It screws up everything with your body and your mind. One way to counteract this is by working out, getting sunlight, eating the right foods and staying off caffeine. Try some black or green tea instead.

30 minutes of working out will combat cortisol production for about six hours. Even going for a walk helps a lot.

Most of the world's brightest minds and most successful people suffer from depression, and knowing that you ARE NOT ALONE is a huge step forward.

You can beat depression and your life will turn around!

Talking about it and seeking help is definitely a step in the right direction. Keep your chins up.

My comment is more of a meta one about HN. Are we really that interested in these stories of depression? We seem to get at least one a week. I realise it's an issue that may affect people here, but I'm not sure if we need the volume we are seeing now.

Mental illness impacts more people than cancer, diabetes, or heart disease. Unfortunately only 1/3 of people who have the illness get treatment due to cost, access, stigma, etc.

We're working on an app that uses technology to help bring clinically proven treatments to market at a price point that dramatically improves access. We are pairing this with product design that's common on the consumer web but uncommon in mental health apps to help with adherence and engagement with treatment.

I hope this isn't perceived as attempting to capitalize on a serious thread. We (the founders) have incredibly personal reasons for pursuing this problem. Many in this thread are likely ideal early adopters for the product. The general awareness that this discussion is raising is a good opportunity to reach out and ask for help, as helping us will ultimately help many others.

Forgive my ignorance and bluntness, but reading the above, it sounds more like an anxiety disorder than like depression. Both are serious, but I'm not sure if it helps to confuse the two?

I've not experienced either seriously, but I know people who have. Depression seems to be more about things not mattering anymore, everything being pointless, the world seeming drab and just not fun anymore, rather than feeling that everything is going to go to shit. Anxiety, though, (and I'm speaking from experience here, having had some light anxiety attacks caused by too much regular caffeine usage) seems to be characterised by a feeling of impending doom, that everything is wrong, it can't be fixed, it's all hopeless, etc. But in my (mild) anxiety attacks, like Rand, I still cared about the outcome. I just felt like there were too many problems to solve, overwhelmed, ready to say "fuck this", give up the entire thing, and start again from scratch with something completely different.

PS: Otherwise, props for the very honest and open article. Running a business is a lot of responsibility and very stressful, and it can be comforting to know you're not the only one who seems surrounded by world-ending scenarios.

Rand, if you're reading this, two things occur:

1 - you're far from the first person to go for big-bang software releases (though listening to your CTO is probably a good idea)

2 - in _Fooled By Randomness_ by Taleb (I believe; I could be misremembering) he describes the incredible level of stress that monitoring his investments daily created. I seem to recall the author writing that he simply was unable to monitor them every day and instead had to look only at periodic summaries. Perhaps this may help people who get too mentally exhausted looking at numbers daily? I mean, it's good to notice immediately if they crater, though that can be scripted. Beyond that, there's probably not much value in looking at them 7 days a week that you don't get looking at them once every seven days. I use the same technique on the elliptical machine; time crawls if I look at the timer, so it's an exercise of will to go as long as possible before looking.
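The "that can be scripted" part really is only a few lines. A hypothetical sketch (names and thresholds invented for illustration) that only flags you when the latest number falls well below its recent baseline:

```python
def cratered(history, window=7, drop_threshold=0.30):
    """True when the latest value is more than drop_threshold below
    the average of the preceding `window` values."""
    if len(history) < window + 1:
        return False  # not enough data to judge yet
    baseline = sum(history[-window - 1:-1]) / window
    return history[-1] < baseline * (1 - drop_threshold)
```

Run something like this from cron against your daily signups or revenue and you only have to look when it fires, instead of staring at the dashboard seven days a week.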

Here's Chris' comment from when this was posted (but didn't make it to the front page) yesterday:

---

malgorithms 22 hours ago | link

I think the great thing about IcedCoffeeScript + the ESC library is how it fits into more complicated flow logic, while allowing easy refactoring. Max doesn't really get into the otherwise impossible examples in his post.

Even if you're firing off RPC's awaiting in the middle of a loop or switch statement, you can move logic around just by shifting individual lines. Consider how simple this looks:

This is a really nice alternative to Promises, which I have grown to really love in the last few months - especially since bluebird (https://github.com/petkaantonov/bluebird) handles errors really sanely.

It takes a lot of getting used to but it is really, really nice to be able to surround error-prone code in what amounts to a try/catch, with a single catch that can (like Java) branch based on error type.

For a second it feels like sync programming.

As cool as ICS is, I strongly prefer the yield syntax to the await/defer syntax. It's a single keyword and it's much more obvious where the value is coming from. Plus, it's going to be everywhere soon.

At keybase, do you guys intend to continue with ICS or eventually move to ES6?

I switched to FreeBSD a couple of years ago, partly for the sake of ZFS which is a first-class filesystem on that platform. FreeBSD was much more similar to Linux than I expected, and where there were differences, the FreeBSD way was usually simpler. My system has been stabler ever since, and I no longer fear to hit the "update" button.

Thanks for the explanation of misdirected writes. I've heard the term before, but didn't know exactly what caused it. Reading this post was like watching one of those How Things are Made shows on the Discovery Channel. Very interesting to see how some things I take for granted actually work.

"In the case that we have two mirrored disks and accept the performance penalty of the controller reading both, the controller will be able to detect differences, but has no way to determine which copy is the correct copy."

If you 'seed' the checksum algorithm for a block with the block number being written, a subsequent read of that data from a different block address will produce a checksum failure. That would make it possible to choose which copy has the right data.

So, if you are willing to eat the performance, you can detect single misdirected writes.
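The seeding idea above can be sketched concretely. This toy model (the disk dictionary and function names are made up for illustration) folds the intended block number into the checksum, so a write that lands at the wrong address fails verification on read-back:

```python
import hashlib

disk = {}  # toy disk: block number -> (data, stored checksum)

def block_checksum(block_number, data):
    """Checksum 'seeded' with the block number the data is meant for."""
    digest = hashlib.sha256()
    digest.update(block_number.to_bytes(8, "little"))
    digest.update(data)
    return digest.digest()

def write_block(intended_block, data, actual_block=None):
    """Passing actual_block != intended_block simulates a misdirected write:
    the checksum is computed for where the data was *meant* to land."""
    if actual_block is None:
        actual_block = intended_block
    disk[actual_block] = (data, block_checksum(intended_block, data))

def read_block(block_number):
    data, stored = disk[block_number]
    if block_checksum(block_number, data) != stored:
        raise IOError("checksum mismatch: possible misdirected write")
    return data
```

A plain checksum of the data alone would verify fine wherever the block landed; seeding it with the address is what turns a misdirected write into a detectable error.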

>ZFS is operating on a system without an IOMMU (Input Output Memory Management Unit) and a malfunctioning or malicious device modifies its memory.

If a Linux system possessing an IOMMU was booted with iommu=pt as a kernel command line option, does the IOMMU still protect from this type of failure? This option puts the IOMMU into passthrough mode which is required to successfully use peripherals on some motherboards.

I found the Reordering Across Flushes section really interesting. So one rule of thumb is that you should not use hardware RAID with battery backup? Are there other types of hardware that would give you the same problems?

I think everyone should watch Orson Welles' F for Fake. The main theme of the documentary/exposé is that art is too rarely appreciated for what it is, but more for who made it. And everything went downhill as soon as "experts" started claiming that they could tell a "real" painting from a "fake" and acting as if the fake weren't also valuable as a work of art. If we just got rid of the experts...

Some very skillful work, though I have a hard time imagining a museum accepting such copies as originals, at least assuming they had a print of the original to compare with. The basic structure is there, and pretty perfectly from what I can see, but some details differ. What kind of review process goes on at museums in these situations?

Nero was dubbed a tyrant because the media at the time - that is, the senators who could write and publish - disliked him. The dislike is understandable given his execution of Seneca, one of their own. Tacitus hated his guts, and later Christian history dubbed him the worst tyrant. The infamous Damnatio Memoriae (the worst possible punishment for a public figure at the time: erasing all evidence of the person's accomplishments) was held against him as well.

But if you think about it, he never really messed up in his role as Princeps. In short, Augustus' princeps was given two responsibilities: national security (as Emperor) and the food supply (as 'Caesar Augustus' and 'Princeps'). The food supply was pretty much guaranteed during his tenure, apart from one turbulent season.

For national security, many give Corbulo the credit for resolving the conflict with Parthia (i.e., letting a Parthian prince take the throne of Armenia, but having Rome confer it ceremonially). While it was a drastic change in Roman politics, Nero was the one who gave the final say. He was unstable, but he wasn't stupid.

All the northern defence lines were peaceful (i.e., there was no major breach) and the conquest of Caledonia (Britain) was ongoing. When the Judea revolt came about, he sent possibly the best card against it (Vespasian, who later won the civil war after Nero's demise).

His demise was a complex one; he lost the respect of the populace (authority) due to his murder of Octavia and his mother, the support of the senators (justification for power), and finally of the armies (military power), all at the same time.

Compared to medieval and later post-Renaissance divine right, Roman emperors' powers were far more dependent on justification, and consequently, liable to be checked.

I'm actually surprised LinkedIn's widget doesn't say "X people viewed your profile in the past Y days" while constantly modifying Y so X can change without actually changing... since that's one of the many shitty attention whore things they do with email alerts.

I haven't read all the responses, so I hope I am not repeating anyone's insight. I'm finishing my PhD in biophysics, and I wanted to share my perspective from conversations I've had with my boss/PI and other investigators. In life science research, there are huge incentives not to repeat others' research and, furthermore, not to publish negative results.

The first disincentive comes from funding bodies: the NIH et al. (NIGMS, NIEHS, ...) don't like to pay for you to do "someone else's science". If you manage to get a grant, and it comes out in a progress report that you repeated too much of other people's work, be prepared to have that funding reduced or cut.

Academic departments strongly discourage new hires from publishing negative results and/or repeating other people's work (mostly because doing so will likely decrease their chances of getting published and funded).

Academic journals hate to publish negative results, but seemingly have no problem publishing bad science (yes, Nature, I'm looking at you: http://retractionwatch.com/2014/09/11/potentially-groundbrea...). Early in my PI's career, she tried to publish a very important negative finding in a high-impact journal. The article's acceptance was accompanied by a personal letter from the editor urging her to consider other journals for negative results.

Another barrier, quite honestly, is ego. While it may sound as if my boss is "one of the good ones", alas, she is not. On the occasions when I have asked to repeat another group's seemingly unbelievable results myself, I've been flatly denied on the grounds that this kind of work does not express the sort of originality of research produced by her lab. In other words, nobody wants to be known as "that lab", the naysayers of the field, those who would dare to question a colleague's ideas.

Finally, this leads me to the last barrier I have observed: scientific communities/societies. If you are one of the lucky few who end up publishing negative results of major significance, prepare not to be invited to dinner at next year's Society for X annual meeting. Yes, in many ways life science is stratified just like high school. You have the cool kids on track for the Nobel, the weirdos in their corner pushing the boundaries of what is possible, the "jocks"/career scientists who manage to turn a couple of tricks and some charisma into a living, and finally the tattle-tales who seem to piss everyone off with their negative results. These are HUGE oversimplifications/generalizations, but I really think that all of these barriers need to be addressed in some way to fix life science.

In physics, we have been aggressively publishing negative results for decades. There is an entire field dedicated to such work, called "Physics Beyond the Standard Model" (spoiler: there isn't any). I've seen entire careers of extremely good experimentalists dedicated to "failing to reject the null hypothesis" at more and more stringent limits (neutrinoless double beta decay is a good example of this) and have been to week-long conferences where every single paper was either a crazy theory or a negative experimental result.

I left the field 15 years ago because I didn't want to spend my career measuring zero, and presumably over time it will eventually dry up. The conditions for its existence seem to be more to do with having a highly trained group of people who have exhausted all plausible avenues of research in a given area and are left chasing a few scraps. In areas where there are still plenty of positive results to be had, the tendency will always be to emphasize the positive.

As a partial solution to this tendency, in my applied-physics work, where I did get positive results, I tried to include a section in papers entitled "Things That Didn't Work So Well" that sketched failed approaches, to save other people the trouble of trying things that seemed like a good idea but didn't pan out. At the very least we should expect that from the average publication, and be suspicious of any experimental paper that does not include some description of the blind alleys.

I have always had a huge problem with the non-reproducible nature of medical research and its acceptance within the field. Being in medicine, every day you hear a physician citing some study from 15 years ago, conducted on a sample of 20/50/100 or so patients, to justify their clinical decisions. And it always worries me that we put so much faith in these "landmark" studies, as if their findings were somehow legitimate and true because statistical significance was reached at least once. We seem to forget the misaligned incentives and game-playing that go on in research, and the data burying by the FDA, all of which almost certainly distort the data, the publications, and so on.

I tend to take the majority of medical research with a grain of salt, for the reasons listed here and in the article, unless there's some very convincing meta-analysis or successfully reproduced evidence. Call me overly cynical, but calculating a parameter, administering a drug, or changing our methods because of some article we read last month in the NEJM is beyond bogus.
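A rough way to see why "statistical significance was reached at least once" is weak evidence: under the usual 0.05 threshold, false positives pile up quickly across independent studies of a true null effect. A minimal back-of-the-envelope sketch (the function name and study counts are mine, purely illustrative; this assumes independent tests, which real studies never quite are):

```python
def p_at_least_one_false_positive(k, alpha=0.05):
    """Probability that at least 1 of k independent null studies
    reaches p < alpha purely by chance: 1 - (1 - alpha)^k."""
    return 1 - (1 - alpha) ** k

# With ~14 independent attempts, a spurious "significant" finding
# becomes more likely than not.
for k in (1, 5, 14, 20, 60):
    print(f"{k:3d} studies -> {p_at_least_one_false_positive(k):.1%}")
```

Combined with the file-drawer effect (only the one "significant" study gets published), this is exactly why a single small landmark trial deserves skepticism until it is reproduced.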

The Journal of Articles in Support of the Null Hypothesis collects experiments that didn't work. Not much volume, and not a huge amount of prestige, but there should be no shame in publishing there. The content is very diverse and pretty fun.

Titles like "No Effect of a Brief Music Intervention on Test Anxiety and Exam Scores in College Undergraduates"; "Parenting Style Trumps Work Role in Life Satisfaction of Midlife Women"; "Does Fetal Malnourishment Put Infants at Risk of Caregiver Neglect Because Their Faces Are Unappealing?"; "Is There an Effect of Subliminal Messages in Music on Choice Behavior?". Plenty more cool stuff.

I don't think they are aware of how much bad science is out there and how many people are trying to publish it. It wouldn't be a journal every month; it would be a phone book every week. Corporations would simply drown out real science with papers designed to support whatever narrative they were promoting.

Where this may make sense is when Watson grows up and you can aggregate the volume of garbage to fill in the holes in our knowledge. But that's more than a couple of years off, I suspect.

There are many more issues distorting, or distorted by, academic publishing. For example, grants are awarded to the most productive members. That sounds fine at face value, but compare biologists working with ecological systems to biologists working with DNA. In the first case, collecting data by definition takes decades; in the other, it's a matter of hours these days. The ecologist needs more money because the research inherently takes longer, yet the grants are more likely to be awarded to the DNA researchers because they publish more.

DISCLAIMER: I'm not a biologist myself - this is a second-hand story from a biologist friend so if the story doesn't hold up under close scrutiny, my apologies.

The suggestion in the article to pre-register trials is a good one, but I'm extremely wary of a more general effort to "publish rejected research" because there is a huge quantity of very poorly conducted research that really does not deserve publication. Most fields are already drowning in a sea of journal articles - few researchers are aware of all the published studies that might be relevant to their own work - and greatly increasing the quantity of published material will dilute the pool even further.

Replicating findings should be given higher priority, pre-registering methods and analyses should be encouraged/required, but it's important to stop short of "publish all the things".

We tend to think of science as an embodiment of modernity, and therefore as modern itself. What it really is, though, is an institution, similar to academia and fairly old. Human institutions take time to change.

In any case, I'm pretty excited that it's coming under pressure to improve. Publication is really a method of communication and the revolution in communication of the last generation is a profound step change in human history, in my opinion. To use some terms that our great predecessors would have been comfortable with, science is a way to uncover the truth using light. Experimentation, debate, publication, review: these are all ways of making light.

Bringing modern communication into science and the collaborative opportunities inherent in better communication is a potentially very bright light.

Reproducibility and negative results are two parts of the same problem, and a fundamental one in science since the beginning. A better method for solving it (using a computer :) ) is probably coming. If not now, within ten years. Maybe twenty. Soon.

There is a big problem now with negative results not being published, mainly because the competition for government funding worldwide does not lend itself to proving something "otherwise".

A lot of this has to do with incentive structures, especially in life-science research. The grant landscape is intensely competitive, and writing up results is incredibly time-consuming. There is little incentive to take the time to write up and submit negative results to relevant journals. If institutions and grant committees were to require this practice, it wouldn't be nearly as big a problem.

The process generally seems a bit broken, what with the concerns raised in the article and with Elsevier making everything available only to those who pay loads. You'd think you could have something like arXiv plus a rating service to figure out which research is good, worth reading, and deserving of career rewards. Something for a YC startup to fix?

There are two issues here: (1) irreproducible research and (2) negative results. (1) is clearly a problem and must be dealt with by the scientific community and its processes. Let me talk about (2).

For negative results to be published, they too should follow the basic patterns of positive results: innovative and scientifically rigorous. There are always more negative results possible than positive ones. A negative result worth publishing should be something that people intuitively think would work but doesn't. For example, an apple falling from a tree rather than floating in thin air isn't a negative result, because we all know it's supposed to fall to the ground under gravity.

It is tragic to imagine the amount of time wasted by repeating the unpublished experiments of others. It is even more tragic to imagine that someone might be able to gain a hidden insight by finding the gaps in various negative results, which might remain undiscovered for a long time otherwise.