If only there were another woman to whom very similar events had happened at GitHub...someone well-respected, like, I dunno, the very first hire they ever made...if only she had an incredibly similar story to tell that ended with her leaving and then winning a settlement from the company for the abuse she suffered...OH WAIT

"We've confirmed with a GitHub employee that "the wife" is in fact Theresa Preston-Werner, making her husband complicit in covering up (or at least condoning) repeated allegations of harassment and abuse at the company he helped create. We're told this is certainly not the first time the Preston-Werners have treated a female employee this way: Melissa Severini, the company's very first hire, was allegedly paid to sign a non-disparagement agreement after being victimized by Theresa Preston-Werners and subsequently terminated. Other employees have been pressured to do pro bono work for Theresa Preston-Werner's own startup, Omakase."(From ValleyWag, March 17th)

According to Severini's Twitter feed (as of early April), the "investigator" never contacted her.

Horvath's response: "[My claims] are now, more or less, substantiated."

Is she delusional? The report doesn't substantiate her claim of sexism AT ALL. They couldn't even find the malicious code removals by "the guy who she wouldn't let fuck her" that was one of her central claims.

A lot of the press I saw about this case focussed on allegations of sexism.

However, reading the release, the gender of the injured party seems independent of and unrelated to the injustices, which relate instead to the apparently inappropriate behaviour of TPW's spouse and her unsanctioned involvement in the company.

Sexism is a highly incendiary issue in this community. How did a case which apparently was not about sexism come to be so closely related to it? Is the report from Github incorrect or was sexism used to market this to the watching population?

I'm glad they're flatly acknowledging the wrongdoing on what comprised the majority of Julie's complaint.

As to her assertion that an engineer was keeping her out of the code because she wouldn't sleep with him, I've actually lost sleep trying to figure out how you'd handle that complaint. Where would the evidence trail be? You want innocent-until-proven-guilty, but you also don't want to assume a woman is lying about being sexually harassed; how do you balance those two things? Given what we know about sexual harassment, it's a horrible idea to operate on the assumption the accuser is lying, but when it's just word vs. word, what do you do? Presume guilt? Can those two employees co-exist? Can you fire the accused even if there's no proof (and what proof would there be? Commit logs aren't much.)

Anyone have suggestions? Ideally you want to build a culture where this would just never come up and where nobody would tolerate it, but I just don't think that's ever going to be foolproof.

Isn't it entirely possible that the situation with TPW exacerbated the situation with the engineer and the toxic workplace environment? At least in Julie's eyes, since she received the brunt of TPW's inappropriateness?

I'm sure that if I were under that kind of stress from one of the founders of the company I was working for, it would most likely contribute to me seeing other encounters as something they weren't...

If I'm missing something, let me know, but it seems to me that GitHub did about all they could with this investigation (respected outside investigator, seem to be releasing as much as they are able to, although if they stop releasing new information, or new stuff comes out through other channels, my opinion will change). They've admitted that TPW's conduct was unacceptable and are trying to prevent that from happening again. The allegations about sexual harassment and locking Horvath out of code don't, right now, seem to have any merit beyond Horvath's own account. Also, Horvath seems to have accepted the outcome, simply noting that she disagrees with the findings, but not raising issues with how they were arrived at or trying to impeach the investigator's credibility. Her belief that none of this would have happened were she male is an opinion she is entitled to, but is basically impossible to prove at this point. It's unfortunate, but, short of GitHub having a time machine to go back and fix this as it happened, I'm not sure what else anyone expects them to do. My hope: GitHub continues to release what they can, they improve all their employee mediation processes, everyone agrees that they made big improvements to their internal culture, and eventually this whole ordeal is relegated to the annals of a Wikipedia subsection.

This is a really sticky issue; however, I can't help but feel that Julie is handling this quite unprofessionally. She has been posting for the last hour on Twitter about how she is satisfied with the blog post, yet feels that everything in it was wrong, and she's actually going and naming names (other than TPW), which is completely unfair IMHO.

She says that she strongly believes that this would not have happened to a man, and the report says that they found no evidence of sexism in the workplace. This has been brought up before, but is it at all possible that some of the treatment she received was because she is difficult to work with, rather than because she is a girl? Nothing in the report points to anything having happened that was specific to what JAH talked about. Could it in fact be the case that people are treating a sub-par, or difficult, employee (in their own eyes) differently to others around her, and in her mind, in (what I assume is) a predominantly male workplace, she sees this as discrimination? I don't know what to think, but unless there's any evidence either for or against, I think the only reasonable assumption we can make is that there may have been fault on both parties' sides, the extent of which is unknown.

I was reading this intently to find an answer to the one central allegation that should be provable beyond reproach: did another developer "...rip out code" from the git repo due to personal issues?

Almost everything else could devolve into he-said-she-said (or she-said-she-said in this case). However, code commits (and subsequent reverts) are extensively tracked.

Assuming they can roughly pinpoint the time window involved I genuinely wonder how they conclusively proved that it did not happen.
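For what it's worth, the audit itself is mundane git archaeology. Here's a minimal sketch of the kinds of queries that would surface removed code; the repository, file, and author names below are a toy setup of my own, not anything from the actual investigation:

```shell
#!/bin/sh
# Toy demonstration: deletions leave a durable trail in git metadata
# that an investigator can query. We build a throwaway repo, delete a
# file, and then show the queries that find the deletion.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "jane@example.com"
git config user.name "Jane Example"

echo 'important feature' > feature.txt
git add feature.txt
git commit -qm 'Add feature'

git rm -q feature.txt
git commit -qm 'Remove feature'

# Every commit that deleted files, with the file names -- the kind of
# "malicious removal" an investigator would look for:
git log --diff-filter=D --name-only --oneline

# All commits by a given author, to diff by hand:
git log --all --oneline --author='Jane Example'
```

On a hosted repo you'd run the same queries against every branch, plus the server's push log, which records who pushed what and when.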

If it were true, then that engineer should have been immediately terminated FOR CAUSE and sued in court. Also, the fact that he is still employed (and presumably just recently promoted) would have made GitHub look much worse. Regardless, it looks like that allegation was unsubstantiated.

In regards to Julie's statement of GitHub being a "toxic workplace" and the findings of this investigation, I think what we're seeing here is how microcosms of culture can affect the whole organization.

When I worked at a large corporation, it was easy to find people who absolutely loved the culture of the company, and people who absolutely hated it. If you drilled down with each person, it would come down to their personal experiences with their everyday team members. Daily interactions are often extrapolated to the culture of the company as a whole, for better or worse.

So I think both Julie and the report are right. For Julie, her experience was very horrible, and since it involved a cofounder, it's no surprise at all that she saw this as a systemic part of the GitHub culture.

And if the other female employees did not have to endure similar experiences, it's no wonder that others do not share the belief of GitHub as a toxic workplace.

The takeaway for founders and leaders: culture can't just be set from the top, it must be reinforced at every level of an organization.

I think this is a much stronger response than the previous one, for the following reasons:

1. It is a lot less mysterious. It tells us exactly what steps they took, and why they felt the investigator was appropriate.

2. They do admit faults and weaknesses, especially on the truly bizarre part of the original claims (i.e. showing up at a company where you don't work). If that part had been denied, then something was factually missing.

3. The part where they do claim they were not at fault -- the engineer's work -- doesn't come across as defensive.

I don't know how they could have done a better job. (I am not saying they could not have done a better job, just that I can't think of it).

> Employees were asked about their experiences here, good and bad. Women at GitHub reported feeling supported, mentored, and protected at work, and felt they are treated equitably and are provided opportunities.

I'm glad to see the claims of sexual and gender-based harassment don't seem to be true. Perhaps larger companies should make it part of their culture to have periodic audits similar to this investigation?

And unfortunately this will change no one's opinion. GitHub got tried in the court of public opinion and was sacrificed.

Short of audio or video evidence, those who backed GitHub will feel vindicated, while those who backed Julie Ann Horvath will see this as the "Mad Men" culture of Silicon Valley exercising its muscle to suppress the story. The small but sane few in between who were waiting for the full story will never get it, because the other two sides are just going to throw hyperbole until the cows come home.

I wonder if we will ever be able to have a sane conversation about "sexism in the Valley", or whether, because it concerns something so technologically entrenched, the mob mentality of the Internet will always drag it centre stage.

This sounds very snide, but honestly, I don't think the people collectively calling for GitHub's head have given them any other choice. It's clear that people are unwilling or unable to listen to the facts of the case. They have their pre-set opinions on what happened, based on one side of the story. At this point, I'm not sure what GitHub should do. They have done their best to give lip service to these complaints. Here is the first comment already:

"When you've dug yourself into a hole you should stop digging"

I agree. I don't think it's worthwhile for the Github PR team to continue to address and give legitimacy to those voices who are beyond reason and simply want to express anger.

So bottom-line: the company is not a haven for male brogrammer types, the founders fucked up, and please, engineers, don't let this scandal affect you.

Still, there's a bitter taste left here. The whole situation remains somehow murky and nobody has benefited from this "clarification". My personal opinion is that if you have a disagreement and there's grounds for legal proceedings, you should start the legal proceedings. It might be an uphill battle, it might eventually prove useless, but at least it's a final answer to a question.

My guess is that this public storm did not help anybody: neither Julie, nor GitHub, nor the movement against sex discrimination in the workplace. Everybody still thinks they were right, and no "final" resolution was reached. And most importantly, with regard to the real issue (the discriminatory treatment), nobody DID anything. The founder just quit (without being accused of much and without "paying" for anything), GitHub is still the best place in the world to work (as it was before this thing happened), and somehow Julie seems a little more paranoid than before (hey, what can we do, an independent expert did not confirm her story).

I don't like the defense I'm seeing here of "Oh, we're not talking about outright sexists, just equal-opportunity jackasses. Therefore, the claims of a sexist environment don't hold".

I've worked with very few outright sexists/racists/whatever-ists in tech (and judging by this discussion, they definitely exist, they're loud, but the vast majority of us have a consensus as to what we think of them). But I've worked with a large number of people (and occasionally/regretfully have ventured into this group myself) who can be described as "equal opportunity jackasses".

The reality is this - the damage of being an "equal opportunity asshole" is felt more strongly by marginalized groups. When a white male is mistreated by $jackass, the thought process goes: "$jackass is being a jackass to me." When a member of $marginalized_group is mistreated by said jackass, the thought process is: "$jackass is being a jackass to me. Is it because I'm a member of $marginalized_group?" and every future interaction with that person is tinged with that thought: "I'm still $marginalized_group... Is $jackass going to be a jackass to me?" The motivation of the jackass is irrelevant - they create a space where the negativity falls more heavily on marginalized groups - that is, behavior that isn't overtly discriminatory creates a space that is.

Can we please try being nice to each other for a change? Not just because it creates a more inclusive environment, but because it creates one that's a whole lot healthier for all of us. We in tech idolize a whole lot of people who can probably be described as equal opportunity assholes (Jobs, Bezos, and Torvalds immediately jump to mind), but really - let's look at what our words and actions are doing to each other. I don't think there's a single one of us here who isn't guilty of this at some point - and I don't think any of us can promise never to be a jackass in the future. But let's not call it a mark of honor -- or behavior to be imitated. If you're being a jackass, don't justify it. Apologize for it. And try to do better in the future.

Does this report say that the investigator talked to Julie, or at least tried to talk to Julie?

That was one of Julie's original complaints about the original post.

If I'm investigating a complaint by someone, it seems reasonable that the person making the complaint would be the main person to talk to. Otherwise it's impossible to say you did a thorough investigation.

It's interesting how public opinion is always right (even though it generally has none of the facts), and how it's always ready to treat corps as "guilty until proven innocent" - if possible, the very ones that have actually been nice and caring to said public.

Be it GitHub, Mozilla, what not - the last weeks have been particularly sad.

Regardless of what actually may have occurred at GitHub between Horvath and any other GitHub employees, in court the case would be dismissed for lack of evidence.

Her response on Twitter, on the other hand, will stand for all time as a testament to her character and I'm afraid it won't do her any favours as she comes across as having a strong sense of entitlement.

I don't think people should be expected to accept mistreatment. But it swings both ways - they shouldn't be allowed to believe that their point of view is any better or more right than anyone else's and the things she said smack of someone who is angry they didn't get what they wanted.

There were hurt feelings at Github. I can appreciate that. In the public eye, though, you have to handle it with dignity - you have to be beyond reproach. And I think the things she said on her Twitter feed have done a remarkable job of painting her in the worst light possible and put her credibility beyond repair.

I'm still sceptical. I'm not a woman, but it seems reasonable to think that women have a hard time talking truthfully about how badly they are really treated. It is less likely for a woman to convince the public that something bad was really bad, and from day one of coming forward the woman receives more hate than the man.

I also heard that statistics say there are more false negatives (= a woman said bad things happened but the investigation says it's untrue) than false positives (= a woman successfully lied about things that happened).

All that decreases my trust in such a clear "false positive" investigation result, although I admire GitHub for publicly working on the issue. I hope that, besides saying they didn't do much wrong, they increase whatever they are doing to keep the workplace safe and healthy for their female employees.

An agreement that a spouse shouldn't work in the office. Gosh, this is even worse than the original complaint. In California you can't have such agreements; this is marital-status discrimination. I don't know what kind of job Theresa was doing, but even though she wasn't on the payroll, being a volunteer or an informal adviser is perfectly acceptable. If anything might be wrong here, it would be mixing up corporations or some such. But employees are in the company to do their job, not to control borders, and "this person is not supposed to be here" sounds territorial and toxic. If you're busy or not open to certain interactions, say so. I suspect it wasn't Julie who got harassed, especially combined with the spreading of gossip. And I find it incredibly sexist to consistently refer to a professional woman as a "wife". Compare to: "I insist that this engineer/marketer/director shouldn't be on the floor".

>Even so, we work in a world where inequality exists by default and we have to overcome that. Bullying, intimidation, and harassment, whether illegal or not, are absolutely unacceptable at GitHub and should not be tolerated anywhere. GitHub is committed to building a safe environment for female employees and all women in our community... I'm sorry to everyone we let down, including Julie. I realize this post doesn't fix or undo anything that happened. We're doing everything in our power to prevent it from happening again.

Why the apology on Github's part after it was concluded that there was no gender-based harassment towards the engineer in question? Is this apology pertaining to the other issues?

TBH, while on one hand I appreciated GitHub's transparency, I feel this story made the company vulnerable to a point (of no return?) from which I think it will be hard to recover.

Also, I wish false allegations in this world were treated the very same way as if they were true: so instead of "We apologize to Julie..." I'd prefer "Julie made false allegations in public; we'll file a lawsuit against her"...

I mean, Julie put tons of "bad" on top of a company without any proof (about the sexism), and (judging by the report) she exaggerated almost everything.

Sorry, but mature and responsible people would look out for bad attitudes and bad workplace relationships from before the company even begins operating. The only people who are "unprepared" to deal with abusive behaviour are those insensitive and morally corrupt enough to tolerate bad people and bad habits until actual trouble begins.

It sounds like external social pressure is forcing GitHub management to grow up all of a sudden, confirming the existence of serious problems.

While some have already expressed this sentiment, can I just say that we should all give mad props to GitHub for taking additional steps to provide this transparency. They could have just ended with their original post, taken the windfall, and let the controversy die down. It takes guts to put out a statement like that (both legally and as an admission of wrongdoing), and I think it shows that Chris Wanstrath is the right person to take the lead of the company. Removing a person (or couple) who abuses their power like Tom Preston-Werner is probably for the best.

Based on the first two paragraphs of the "independent" investigator's website, http://www.ryaa.com/, the investigator seems to be biased in favor of the company. It could be argued that GitHub is using their CEO as a scapegoat in order to avoid having to confront a possibly sexist internal culture. I wish companies and individuals were not afraid to address their sexist cultures or thoughts. Living in postmodern society, it's almost impossible not to have a sexist thought - it's what we do with these thoughts that matters. I look forward to seeing the new initiatives GitHub will be launching, and hope the initiatives will bring about real change in company culture and cause people to question their beliefs. Meanwhile, I'm trying to decide if I want to switch to a different company to host my code. Any ideas?

Can we now stop touting immature companies and bad employership as the future corporate organization? Please?

When you put people together in an organization, shit happens. There has to be absolutely no malice involved for things like this to happen. Shit just happens. People fuck up, and sometimes their fuck ups hurt other people. I did it, you did it, we all did.

But not creating a decent company structure to deal with that reality, and instead making your company an experiment in social Darwinism under the guise of "meritocracy" and "no managers", is utterly irresponsible. It's begging for shit to escalate and for people to take advantage.

If you have over some x number of people in your company, you need people to manage that - which has fuck all to do with hierarchy.

So Github "recently we hired an experienced head of HR". Over 200 employees, over $100 million in funding, and now they finally fucking bother to hire someone to look out for them.

I'm not so much pissed off at GitHub as I am at the entire tech community that has been cheerleading running a company this badly.

"People are their own managers" works fine when it's just about work. But it isn't. It never is, because they are people. With all the dumb, stupid and sometimes ugly stuff they do besides just work.

Employees are not lab rats.

P.s.

And don't use "rapid growth" as an excuse. It was an ideology. Github's leadership didn't believe they needed to be prepared, that somehow these things would magically not happen in their happy start-up commune.

"Furthermore, there was no information found to support Julies allegation that the engineer maliciously deleted her code. The commit history, push log, and all issues and pull requests involving Julie and the accused engineer were reviewed"

Isn't one of the whole points of git as a VCS the ability to rewrite its history?
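Yes, but rewriting history leaves its own trail. Here's a toy sketch of my own (nothing in it is from the actual case): even after a hard reset makes a commit vanish from `git log`, the local reflog still records it, and the server-side push log the report mentions is a separate record that a force-push doesn't erase.

```shell
#!/bin/sh
# Toy demo: a commit "rewritten away" with git reset still shows up in
# the reflog. Server-side push logs are an independent record on top.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev Example"

echo one > a.txt && git add a.txt && git commit -qm 'first'
echo two > b.txt && git add b.txt && git commit -qm 'second'

git reset -q --hard HEAD~1   # rewrite history: drop 'second'

git log --oneline            # 'second' is gone from the visible history...
git reflog                   # ...but the reflog still shows "commit: second"
```

So a reviewer with access to the hosted repository's push log and reflogs can generally tell whether branches were force-pushed over, even if the offending commits themselves were discarded.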

Definitely firing was the right thing to do here. I mean, it's unacceptable, a CEO acting like an asshole, that's totally unheard of. There's no room for reproval or a 'three strikes' policy here, companies should immediately fire their founders for the pleasure of the audience. It shows character, morals and strength of leadership, which inevitably lead to robust success.

Based on the results of the investigation, it seems the sexual harassment / gender-discrimination allegations are bogus. What we are left with is a female employee who was systematically mistreated by the CEO's wife, and a CEO who was unable or unwilling to put a stop to it.

So what can we learn from this apart from the obvious deficiencies of said CEO and said CEO's wife?

1. Nothing causes more trouble at a company than two women who don't get along. I'm sorry to say this, and I know it's politically incorrect, but women are catty and it's very difficult to get women to work together without all sorts of drama.

2. Women in tech frequently play the gender card when their jobs don't work out. Whatever the reason for their resignation or termination, they are highly likely to perceive that their gender was a major factor in the outcome.

I love Firefox -- for what it's done, what it represents, and what it helps guard against. I have fond memories of those early versions of Firefox (née Firebird) that busted open the IE monopoly, and were hands-down the best browser going at the time.

But it's never felt very good on the Mac to me, and it still doesn't. Here are a few early thoughts on this release, from the perspective of a happy Safari user, with a pretty (nit-)picky eye.

* Separate address and search bars are old-fashioned. As a user, I don't want to have to make this distinction, and it's hard to imagine most users on the street wouldn't find it confusing

* The address bar is square-edged while the search bar is round-edged, which is visually displeasing. (I realize this is because there's a convention of "search bars are rounded," but the inconsistency remains.)

* Tapping the hamburger menu on the far right, it appears with a combined drop-and-fade-in effect, and then disappears instantly.

This is jarring on the Mac, because it is exactly the opposite of native menu behavior, which is to appear instantly, and disappear with a fade. (I also believe the native behavior makes more sense: when you're tapping a menu, you want to do something, so you don't want to be slowed down by an animation -- just show the menu.)

(Addition of a hamburger bar on the far right at all is suspicious; often it's a UI "dumping ground")

* The "what's new" slideshow that appears at the bottom of the screen has to be controlled by clicking small <- or -> arrows, instead of just scrolling, which feels very outmoded

* The scroller applies a fade effect to incoming content, but only to the text, not the image, which is jarring.

* Multi-touch swipe to go back/forward shows no feedback! (Safari does this best, where the whole page slides away, revealing what's underneath; Chrome does a half-assed thing with arrows fading in, which isn't nearly as nice, but at least better than no feedback.)

Pretty nitpicky, I know, but I recently read an article trumpeting this release of Firefox's incredible attention to detail.

If you want a highly customizable browser, Firefox is it. Chrome is in its infancy when it comes to customization and they often make decisions that prevent power users from taking advantage of their browsing experience.

For example, they've disabled custom stylesheets in recent releases despite a clear indication that people were sharing themes, they have very old bugs that don't get resolved (like the stupid white flashes on dark themes), and there are major accessibility issues.

Generally they try to appeal to and prioritize regular users (which is fine), but they go out of their way to make decisions that ignore power users, intentionally not even providing alternatives.

Finally, and the most frustrating part: they don't value feedback. https://code.google.com/p/chromium/ is a joke and a waste of time. The most-starred issues are often closed to the public when they reach a certain level, and users are asked to submit a new bug if the old one is not fixed. This means that if there is still a bug, you have to wait months for other users to experience it, find the time to search for the bug and star it, reach enough stars to get attention, and then get a response. Bugs are often miscategorized, and the wrong team has them in its backlog. It's a mess.

There isn't a feature in Chromium or Google Chrome that Firefox doesn't deliver.

Take it from a serious Chrome user and extension developer of several years: switch to Firefox if you want to easily tweak anything that bothers you without having to change the damn source code.

Overall I love the redesign, but I wish they would have compacted the top chrome a bit so it matches the height of other major browsers. Firefox has slightly taller chrome for no good reason, as seen in this picture:

It's still not as good as Chrome's ability to automatically disable the cache while devtools are open, but it's way better than what I had to do before, which was override the automatic cache management settings to limit the cache to 0 MB of disk space.
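For reference, that old workaround looks like this in a user.js file in the Firefox profile directory. To my knowledge these two prefs are the standard cache-sizing knobs; treat the exact values as my own setup rather than a recommendation:

```javascript
// user.js -- force the disk cache down to nothing while developing.
// Disable automatic ("smart") cache sizing first, or the capacity pref
// below is ignored.
user_pref("browser.cache.disk.smart_size.enabled", false);
// Cap the disk cache; the unit is KB, so 0 effectively disables it.
user_pref("browser.cache.disk.capacity", 0);
```

The same values can be flipped by hand in about:config if you don't want a user.js overriding them on every startup.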

The main reason why I use Status-4-Evar is because when I hover over a link I don't want that link to pop-over the page content, which is what Chrome did first, then Firefox copied like sheep. It's distracting, like a tiny little pop-over in the corner of your eye.

I like having URLs show in a status bar separate to the main web window. It's out of the way, and I just like having my web browser framed by an interface. Is that so wrong?

What's so bad about a status bar? Why is there this idea that everyone wants the full screen web?

First Mozilla forces their CEO to resign, now they're being the soup nazi over the status bar which has been with browsers since day one of web browsers. You call that progress? I call it chopping down an old tree that nobody wanted chopped down.

I think it's beautiful. I can't explain exactly what I like (I'm not much of a designer) but I can say that I now find it more visually appealing than Chrome. The only reason I'm staying on Chrome is because of a few extensions and because it syncs so well with my Android phone.

I think there is a lot of hate here with the tone "I liked it the old way because I was used to it!". New designs change things, that's why they're new. If we didn't ever want anything to change, we'd still be looking at this every day:

Where the hell is my forward button? Why does browser.tabs.onTop not mean anything anymore? Where's my refresh button? On the... right side, inside the Awesome Bar? Why isn't it a separate button? Why can't I put the Home button back beside the Back button? Why is the back button permanently part of the Awesome Bar? Why does the Awesome Bar change shape and size to allow the forward button to exist? Why isn't the forward button an element that I can move around like the refresh button? Why can't I move the Awesome Bar at all? What happened to the Status Bar? Why can't I replace the Status Bar with the Bookmarks Bar at the bottom of the screen? Why is the Start Button now permanently overlapping with anything displayed in the browser? Why can't I move the Menu Button? Why is your Menu Button right where Google Chrome's menus are by default? Also, when you remove the Title Bar, I can't grab the window because I keep my bookmarks as icons without titles on the Menu Bar.

Why are you messing with everything, Mozilla? Why are you breaking the UI metaphor? With the tabs on the top, all the elements under it are made to appear a part of that tab. There's no reason that I can see for the tabs to be on top. It doesn't look pretty, it takes extra pixels to render the smooth curve where the tab meets the next bar. Even if I'm insane and it's the same width, it still looks awful.

* Where is the "use small icons" option? This should be priority #1 to fix.

* Can't double-click the top left corner to close on Windows.

* How do I get to the hamburger menu with only my keyboard?

* If my mum accidentally removes something from the hamburger menu, like Options, and one day I have to guide her back to that option over the phone, there are like hundreds of steps to go through. First I have to figure out if she is actually looking at the right menu, then I have to figure out if she actually has the icon on her menu or not. When that is done I have to guide her to the customize menu, then pull the icon back, close the customizer, open the menu again, and then click the icon. These steps will all be different depending on how your mum customized the menu. Previously I could just tell her to click the top left menu and go to Add-ons. There is no standard path to follow (except the Alt-key, down-key keyboard fallback, which still uses the old menus from version 3). The hamburger menu should be a shortcut for your favorites, not the only way to find an option. Seriously, it's as if Windows forced you to add a Control Panel shortcut to the Start menu before you could access your network settings. This is trying too hard to be too user "friendly".

* How do I find the dropdown menus for my add-ons? Seems like the only path is, again, to use the hidden Alt, Down keyboard fallback and then go to Tools.

Not super relevant to the design, but does anyone know which browser (Chrome or Firefox) uses less memory on a Mac these days? Not talking about base footprint, but let's say I have 25 tabs open. Chrome uses ~100MB per tab, more for long-running tabs with a lot going on like GMail. Seems totally insane to me, a website that's not a complicated web app should have a tiny memory footprint. If Firefox could significantly improve this I'd move.

I expect that this release will mark the start of a rapid decline in Firefox usage. Let's wait a few months and see. UI changes like these are the best way to irritate the users who actually used the browser and got used to the placement of the controls. It seems as if the designers were themselves using Google's Chrome at the same time and have now "unified" their own experience with it. Well, at least they've removed some reasons for users not to switch to Google's browser.

* I just tried and your developer tools don't cause the tab to freeze for 1 second every time I switched into the tab.

* Wanted to see the addons I had, clicked the menu > nice icon that said Add-ons - massive UX win.

I feel at home with this browser. You stand up for privacy, and for that you deserve so much more praise than you get. I'm switching to Firefox to see how it goes. The only thing keeping me on Chrome is the Dev Tools, and I want to give Firefox another try.

* Menu bar is hidden by default on Linux and can be opened with Alt (that didn't work before).

Bad:

* Menu ("Firefox") button is still not movable for no obvious reason.

* Reload/Stop button is now forced to be in the URL bar (before users had a choice where to place it). That's pretty annoying, it's very uncomfortable when Back/Forward and Reload buttons are so far apart.

* It's no longer possible to open a new tab by double-clicking in the tab bar. Seriously, guys...

* I really liked the Ctrl + / status bar; I used it to hold my add-on icons. It didn't use much space and I could use them whenever I wanted. Now I can't do this anymore (unless I overload the top bars with icons I use... say, once a week?). // EDIT: I just figured out I can use the sandwich menu to do this, that's pretty cool

* It's no longer possible to move the refresh button... WHY? I loved it on the left with the back/forward buttons. Why would it be in the URL bar?

* And it seems it's no longer possible to add a button to show the bookmarks (Ctrl+B); there's only this awful double button with "add fav" and this useless menu.

It's beautiful, but it lacks a lot of the customization that was possible before... I'm really considering switching to another browser (maybe Opera?).

I've always had a bookmarks bar of single and double letters, like R for Reddit, Y for YCombinator, F for Facebook, $ for stocks, etc. Now with FF29 I have these blasted file folder icons next to each of my little codes, cluttering up the whole bookmarks bar.

I can't find any way of getting rid of the file folders -- not even thru Customize. Anyone found a way?

I must say I am impressed. When I saw the screenshots of the new UI I thought that I would not like it but now that it's finally landed in stable I find it really nice. I was afraid that the new tabs would behave too much like Chrome's (when a lot of tabs are present they all become smaller), I'm glad this is not the case. I also think that making only the selected tab curved and keeping the rest a straight shape was a good call.

Apart from the UI changes I noticed that memory usage went down considerably. But we'll see how it behaves after some prolonged usage.

The one weird thing I noticed was that my bookmarks are gone. I'm not sure if this happened in this release, or some previous one, or whether I accidentally deleted them somehow.

On an iOS device this page informs me firefox isn't available for iOS. There is no link to view the contents of the page, so I'm locked out of finding out what's new in Firefox 29 until I'm off mobile. Consider adding a "full site" link, or a "view desktop version" link.

I kind of wish they would focus on building out the bookmark functionality with some easier workflow, e.g., fixing functionality issues like only being able to accept auto-complete suggestions in tags with the right arrow keyboard button.

I definitely think bookmarks are highly undervalued. To a certain extent, I think curated and cultivated bookmarks even have monetary value.

Hmm, I think the separators between tabs in the new tab bar are too subtle, and overall it is too dark. The curved appearance of the selected tab also seems out of place.

edit: also, transparent elements with text on them? Noo! Maybe I am just sensitive.. I don't have the best eyesight, and I don't have the best monitors. In any case, I am thankful for Classic Theme Restorer, which someone mentioned elsewhere in this thread.

Just made the switch from Chrome to Firefox. I noticed that even on a high-end MacBook Pro, when I opened a large number of tabs in quick succession Chrome would lag intolerably. It did so on earlier releases of Firefox too, but I just tried on 29 and it opened 40+ tabs without a hiccup. Very pleased indeed.

Edit: I just noticed that the text selection now works as it does in text editors with blinking cursor and all. Great feature!

With all the UI hate, I guess I'm lucky that F29 didn't break Side-Tabs, my favorite addon ever. It does look weird, though, to have the nicely rendered foldertab graphic for only one tab, and with nothing around it to continue the metaphor.

I really want to like firefox, but it is just so much slower/unresponsive than chrome. Is this something specific to my setup? I tried this new version out on a clean profile and compared it to chrome: http://fixme.se/pub/chrome_vs_ff.flv

Been using it for a few hours now. I hear a lot of complaints about the address bar having gotten bigger by a whole 10 pixels and the addons bar being gone, but honestly I think it's a great update. On a full hd screen I can't be bothered by the 1.1% increase in height and the addons bar has only annoyed me. Actually, it's a bit ironic to complain about 10px while at the same time complaining about an entire toolbar having disappeared.

One of the first things I notice after installing, the text on the Gmail buttons is blank (I can see the button outline but can't see the text so I have to rely on the tooltips). Kind of a deal-breaker.

Just a question, but why is the iteration cycle so extreme with Mozilla? Version 29? What is so different from version 3.x.x, where we had normal iterations I could wrap my head around? The whole-number upgrades are insane. I know it's such a simple thing, but trying to relate to software iteration steps on such a fast-moving number is just mind-boggling.

I love Firefox, but gosh - when I have many tabs and windows open it's a major memory hog. Maybe all browsers are like that, but it feels particularly bad on Firefox, despite all their efforts to fix memory-leak issues.

I really don't like the new UI. I miss the good old menu bar at the top of the screen, and I don't understand why Firefox is trying to copy Chrome :/. I wish there were an alternative to Firefox and Chrome; as it looks today I don't want to use either of them.

The lack of options is what kills me. The UI is also filled with clear design flaws, and I still need an add-on bar, so now I have to use even more add-ons just to make it work like it used to... Oh, but we still have the weak performance; that's far from a priority...

As I write this, the story is about one hour old, there are 118 other comments, and the top voted comment - the top voted comment - is criticism by someone who doesn't use Firefox. The comment is totally without technical analysis of why Firefox does what it does, nor does it mention anything that Firefox gets right. The only positive thing the author says involves dragging out some tired anti-Microsoft trope.

That's the problem when stories have this sort of velocity. The quality of comments goes down to the point where "It doesn't try to be an Apple product" is what collects the most upvotes. It's little more than trolling for defenses of Firefox and Safari fan upvotes.

Open a new tab. The recent sites are displayed. Enter a URL and hit return. The website is displayed. Now try to click the back button to return to the recent-sites list... You can't; the back button is disabled. For all the professed attention to detail, this suggests otherwise.

Fucking idiotic morons. Completely ruining what used to be a good browser. I've had enough of this shit. Every single update has to fuck up my Pentadactyl experience. That's it. I give up. If I can't use Pentadactyl, I have 0 reason to use Firefox. Goodbye. I'm removing it from all my computers and never looking back.

You can now ssh to that server as that user by doing "ssh $ALIAS" on the command line, without needing to specify the port or user with the usual command line arguments, or necessarily spell out the entire host name.
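For reference, an alias like that lives in your `~/.ssh/config`; the host name, user, and port below are placeholders, not values from the article:

```
# ~/.ssh/config -- "myserver" becomes the alias usable as `ssh myserver`
Host myserver
    HostName server.example.com   # full host name
    User deploy                   # remote user to log in as
    Port 2222                     # non-default sshd port
```

With this in place, `ssh myserver` expands to the full host, user, and port automatically, and scp/sftp/rsync honour the same alias.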

I've tried this before, and what effectively always happened (to me) is that as soon as I started copying a file, I couldn't continue working in Vim anymore until the file was done transmitting because the copying would eat all the bandwidth. There may be a flag or setting around this, but I've never found it. When I open two connections, it is usually fine.
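For what it's worth, both scp and rsync do have bandwidth caps that should help here; the file name and host below are placeholders:

```shell
# Cap the copy at ~1 Mbit/s so the interactive SSH session stays usable
# (scp's -l limit is in Kbit/s)
scp -l 1000 bigfile user@myserver:/tmp/

# rsync equivalent (--bwlimit is in KBytes/s; 125 KB/s ~ 1 Mbit/s)
rsync --bwlimit=125 bigfile user@myserver:/tmp/
```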

Another recommendation: start an SSH server on port 443 on a server somewhere. Then if you're stuck somewhere on an untrusted network, one that blocks most outgoing ports or one that throttles non-HTTP ports, you can use SSH for tunneling and/or setting up a quick SOCKS proxy to get yourself encrypted, unblocked, full speed internet access.
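As a command sketch, assuming OpenSSH and a server whose sshd also listens on 443 (host name is a placeholder):

```shell
# Start a local SOCKS5 proxy on port 1080, tunnelled over SSH on port 443;
# -N means "no remote command", just forward traffic
ssh -p 443 -D 1080 -N user@myserver.example.com
# Then point the browser's SOCKS proxy setting at localhost:1080
```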

I just learned about remote file editing with vim and scp thanks to this article, it's the only thing I didn't know about and... wow, it's amazing. This will make my life much easier every time I have to remotely edit some config files on my servers.
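For anyone else who hadn't seen it, the syntax (via vim's built-in netrw plugin; host and path here are placeholders) looks like:

```shell
# Edit a remote file over scp without a manual download/upload round trip
vim scp://user@myserver//etc/nginx/nginx.conf
# Note the double slash: scp://host/path is relative to $HOME,
# scp://host//path is an absolute path
```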

As for the rest of the article, really nice stuff; nice tricks for ssh newbies. I wish he had also talked about setting up a nonce system with ssh, moving sshd to a non-default port to prevent attackers from spamming port 22, or even removing password authentication altogether.

About the "Lightweight Proxy" (ssh -D), if you want it to be transparent to the application (not require SOCKS support), you can use my tun2socks[1] program. This is useful if you can't or don't want to set up an SSH tunnel (which requires root permissions on the server). The linked page actually explains exactly this use case. It even works on Windows ;)

One problem I have with SSH is DPI. Deep Packet Inspection seems to be behind the SSH block in place at a local library I work at. SSH out in any form just isn't possible there, even via a browser-based console (such as that used by Digital Ocean, for example). There doesn't seem to be a suitable solution to get around it offered anywhere.

My own fix was to use 3G to do the SSH work via a tethered phone and to use the wifi adapter to run the bulk of any other web traffic. It'd be great to have a workaround for DPI, though, if anyone has any experience there.

While the socks proxy does not require any root (local or remote), it is only useful for programs that support it - which are not many.

However, apenwarr's sshuttle https://github.com/apenwarr/sshuttle is a brilliant semi-proxy-semi-VPN solution that, in return for local root and remote python (but not remote root), gives you transparent VPN-style forwarding of TCP connections (and DNS requests if you want). It works ridiculously well. Try it, if you haven't yet.
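For the curious, a minimal sshuttle invocation (server name is a placeholder) looks something like:

```shell
# Forward all TCP traffic (0/0 = 0.0.0.0/0) plus DNS through the SSH server;
# needs local root (sshuttle invokes sudo for you) and python on the remote end
sshuttle --dns -r user@myserver 0/0
```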

Trust (passwordless ssh login) is set up between me@laptop and me@F, between me@laptop and colo@A, and between all the colo machines (A, B, C...). So colo@A can ssh to colo@B without a password.

I am able to log into colo@A via F without a password, as I copied the ssh key there manually. (Path: me@laptop -> colo@F -> colo@A.)

QUESTION: Is it possible to ssh to the other machines (B, C...) via A while assuming the full identity of colo@A? (The path would be me@laptop -> colo@F -> colo@A -> colo@B/C/...) With my current config, when I try to ssh to B it knows the request originates from 'laptop' and still asks me for a password.
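One hedged possibility, not tested against this exact setup: chain the sessions with `ssh -t` so the final hop is started on A itself and therefore authenticates with colo@A's own key rather than the laptop's:

```shell
# Each -t allocates a tty so the next interactive ssh can run inside it;
# the last ssh runs ON colo@A, so it uses colo@A's key for the hop to B
ssh -t me@F ssh -t colo@A ssh colo@B
```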

Is sshfs a serious replacement for NFS? I've got a Buffalo NAS at home that I use Samba with, but Samba is too slow to watch hi-def videos over. NFS seems to be a pain in the neck to get working on that particular device, and I hate using it on a laptop. I guess I should probably just try it, but I can't see sshfs being any faster than Samba.

I have a shell script that helps with setting up trusted keys. Trusted keys help if you need to run automated tests that involve several machines, or simply if you would like to skip typing a password on each connection.
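The standard tooling for this is ssh-copy-id; a minimal script along those lines (the host list is a placeholder, not the commenter's actual script) might look like:

```shell
#!/bin/sh
# Push the local public key to each host's authorized_keys so later
# logins skip the password prompt (you'll be asked once per host here)
for host in colo@A colo@B colo@C; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done
```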

Everyone seems really concerned with the edge cases right now. What about insurance and liability? What about when X happens and it's raining? These are (usually pretty minor) technical challenges, and I haven't heard one yet that we won't be able to overcome with today's technology.

Under given circumstances, either the car will be safer than a human or it won't be. The moment it crosses that threshold (for most conditions), the world is going to change for the better. From there it's just a matter of optimization until a human watcher isn't even required.

Self-driving cars are worth every penny of research. They will some day be safer than human drivers. With a full network of communicating cars and fail-safes we could almost eliminate traffic-related injury and death. Some day your insurance company will probably charge you more per mile you choose to take control of your car.

Beyond safety, this could make life far more convenient. We won't need to waste 4-10%+ of our entire lives staring at the road doing nothing. That's huge! Once the cars are safe enough, you'll be able to read, write, or take a nap.

Life also gets a lot more efficient. We won't all need cars. Think about the social ramifications: we won't need all the parking space we waste at home, at work, at grocery stores, or downtown. Rather than needing two cars for me and my wife, I could send mine back to get her once I get to work, or maybe just sign up for a service that completely eliminates the need to own.

In this discussion I very rarely see it mentioned how heavy cars are. Since when do we need multi-ton machines to move a being that usually weighs under 100 kg from A to B? Here in northern China, many people have very small egg-cars, which are little more than an electric bike with a thin shell against the cold, and these are much less likely to kill someone by accident.

I think we have been subjected to very strong brainwashing, so that we all believe a nice car is big and has this and that. Maybe these car ads will be seen in 20 years the same way we see cigarette ads right now: dangerous brainwashing by a lobby gone crazy.

Question for self-driving car aficionados. When I'm driving and encounter a strange or dangerous situation these days I often try to think if a robocar will be able to handle it properly. One situation came up the other day that made me nervous. I was driving on a relatively empty highway at high speed (70mph) and there was a piece of debris in one of the lanes. I spotted the debris far ahead and safely changed lanes.

For an autonomous car, the car will a) need an incredibly long sensing range to see this debris in time and b) need incredibly precise lidar/radar to see the debris if it is small. At a far distance, a small piece of debris covers a minuscule solid angle on the sensor. At high speeds you have a twofold problem that tests the limits of the onboard sensors and collision-avoidance systems: objects approach rapidly, and small objects can cause catastrophic damage or trigger other collisions. In the case where other cars are on the road, the problem seems straightforward, since the robocar can probably see all the other cars ahead of it changing lanes in a pattern. On a deserted highway, however, the car is in trouble unless it can spot the debris from a very long distance, it seems. Any thoughts?

It's amazing how new technology is meeting old technology again. For example, the horse drawn carriage.

Typically, the carriage driver doesn't steer the horse but points it in the direction they want to go. The horse and its little horse brain negotiate the terrain and immediate obstacles.

You never had to steer the horse to stop it driving over a cliff; it was smart enough to have a sense of self-preservation. The modern mechanical car doesn't even have that level of avoidance system. This left the driver free to manage more strategic tasks. There were downsides, obviously: horses could freak out and run over a crowd of people.

This is amazing, but if anything it's made me more wary of some of the challenges the project faces.

- What if the cyclist fell off his bike in front of the car? How quickly can the computer process the real-time imagery and react, compared to a human with their peripheral vision?
- What if the cyclist swung from the pavement onto the road? A human driver would probably have spotted the hazard earlier (we all train for that kind of thing when learning to drive). What are the limitations of the car's peripheral vision when checking hazards?
- What if a fire hydrant bursts at the side of the road 50m in front of the car and makes the road ahead really wet? Can the cameras detect quickly enough the need for different driving (and probably braking) due to a change in surface?

The sad truth of this is that while it's an interesting technical challenge, I really can't foresee a situation where a computer could react to all the different things that can happen when driving a car as well as a human.

There are lots of subtleties in in-town driving. For example, the Google car will crawl its nose forward just a bit to indicate when it thinks its turn has come at 3 and 4-way stops. This is a strong signal to the other drivers that the car is about to make the turn, and reduces the frequency of contentions inside the intersection.

It's a nice touch to have the robot replicate behaviour that most people are not even aware they are using.

The above page has photos and a full transcript of the episode; I very highly recommend it.

This piece reminded me of the 'problem' of jaywalking, because the complexity the self-driving Google car boasts of being able to handle could instead be reduced or eliminated by adequate urban policies. There are movements to get the car out of urban areas [1], or at least to make the pedestrian the 'owner' of the street, and I think self-driving cars should be positioned more as an autopilot for long distances rather than an attempt to make sense of the current chaos in high-density areas.

What happens when some asshole with some knowledge of the algorithm decides to troll commuters? For example, standing around an intersection, pretending to move forward/backward, forcing cars to just stop there.

I think the coolest thing about self-driving cars is they can create a mesh network with one another. If I'm driving to work and a traffic accident occurs 2 miles ahead on my route, with multiple self-driving cars present at the accident, those cars can share traffic information with all self-driving cars in the area. My car will then plan an alternative route around the accident before I've even seen it.

Reading a lot of comments here, the theme seems to be that the benefits will trickle down to the average consumer. I can see immediate cases where it's just going to go to those with sufficient economic muscle.

Less need for a garage means smaller houses, and more of them (for example).

Improved insurance? Probably the same insurance premiums (inflation adjusted), with a higher premium for more human miles driven.

If there is money on the table, it will get taken by those with the wherewithal to grab it.

In September 2013 they let one of their S-Class models drive the historic route from Mannheim to Pforzheim in Germany with some journalist(s) on board[1].

On this route, the world's first long-distance (194km/121mi) ride in a motorized car was undertaken in 1888 by Bertha Benz, wife of Carl Benz, who developed what is known today as the world's first modern automobile.[2]

This route contains interurban as well as urban parts. The prototype car nicknamed "Bertha" is not easily recognizable from a standard S-Class model, which already contains most of the systems necessary to enable autonomous driving.

I saw an interesting comment on the blog post. Does anyone keeping tabs on the project know if Google is working on some sort of communication between vehicles on the road? That would make the self-driving cars much more accurate. A lot of 'sudden' circumstances would become predictable by the car.

I still think that the best application for computer controlled cars is to have humans driving and computers handling the safety, braking when they detect danger, stuff like that. Lots of accidents like hitting pedestrians, cyclists or other cars could be avoided, and it's not as high level and difficult as what they're currently trying to figure out.

Genuine question, can anybody answer how these cars will: 1) allow for situational aspects of driving, e.g. I need to get there quickly so might drive faster because I'm running late? 2) protect against people intentionally taking advantage of safety mechanisms, e.g. cutting in front, or stepping out onto the road?

Given that the NSA will likely have a backdoor into any automated driving system, is this really a technology you'd like to see become all pervasive? "We don't like this guy's ideology - ok, let's arrange for him to have an 'accident'".

Psychiatrist here; glad to see this near the top of HN. Schizophrenia is a serious illness, and often misunderstood as "split personality." It is a constellation of delusions, hallucinations, and scrambled thoughts that is often (though not always) pretty devastating to work, school, relationships, etc. For some reason, because we have no blood test or genetic test for it, the diagnosis is still met with skepticism from many in the public, even though everyone seems to accept the diagnosis of migraine headaches which similarly has no clear-cut lab or imaging findings.

A very interesting submission. If you like long-form videos of scholarly conferences, there is an amazing video of a public presentation by two identical twin sisters who are discordant for schizophrenia.[1] As you can imagine, the sister who didn't have schizophrenia thrived much better in life, and indeed is a psychiatrist.

I am privileged to know Irving Gottesman,[2] one of the world authorities on schizophrenia research (he was the consultant credited by the author of the John Nash biography A Beautiful Mind). He used to be one of the few researchers on the topic who thought that there were genetic influences on schizophrenia, which is now established medical knowledge. In the bad old days of Freudianism, schizophrenia was thought to develop solely from "schizophrenogenic mothers," whose bad parenting caused their children's suffering. It was adoption studies in several countries that conclusively showed that genes matter more than parenting in early childhood in triggering schizophrenia.

And yet environmental factors of various kinds plainly matter too, as the cases of monozygotic ("identical") twins not having identical disease course in schizophrenia make undeniably clear. Gottesman and most other researchers on schizophrenia believe that there are a variety of genetic vulnerabilities that people may or may not have that increase risk for schizophrenia, and then stressors in the environment (and it is not excluded that some of those stressors may be purely psychological, influenced by interactions with other people) trigger the expression of full schizophrenia symptoms. This is called the diathesis-stress model of schizophrenia, and the same model is believed to be a helpful research hypothesis for study of depression and suicide.[3] So if you know someone who suffers from schizophrenia, the compassionate thing to do is to help the person find current medical treatment (which has improved enormously over the course of my adult life) and to cope with day-by-day life stresses.

EDIT ADDED AFTER AN HOUR:

Several other comments here involve participants who have close family members with related diseases, or who have related diseases themselves. That's rough. I hope Hacker News is always a compassionate community where you can share your experiences and then encounter empathy and helpful advice. We should support one another here.

This hits me particularly hard because I lost my sister to mental illness just three months ago.

Perhaps our society will one day treat those with mental illness the same way we treat those with cancer or ALS, with compassion and love, instead of with insults and shame. I hope that I am alive to see such an enlightened society.

My brother has the illness. Every ounce of self-doubt I have, I worry is the beginning manifestation of Schizophrenia in my own mind. It's not a good place to be - that worry that every time you "hear voices", it's some sort of announcement about your own mental state? The worst for me is mishearing people, or muffled conversations, where I fill in the gaps with extremely negative content, causing a downwards spiral in emotion. I'm sure it's nothing, and I'm perfectly normal though.

Schizophrenia is such a hard disease to study in the same way that cancer is a hard disease to study: it is not a disease with a common cause (like AIDS, for example), but a collection of symptoms used for diagnostic purposes. At least with cancer we have recognised that it is not one disease (cancer is thousands of different diseases with thousands of causes), but with schizophrenia we seem to still be looking for the "cause".

People tend to minimize the lethality of mental health problems. There's an assumption that completed suicide is the only cause of death for mental health problems. But people with MH problems also suffer strange, suboptimal treatment elsewhere in health care, even in England where we have the National Health Service.

Young people with a mental health problem will especially suffer from poorly funded treatment, often being sent many miles to get treatment.

Off topic: for any Googlers reading, this article contained an advert that minimized Google Chrome and opened the App Store. That must be a bug. It is horrible, horrible behaviour. Please can you poke the appropriate team and ask them to do something?

Has there been any research about the prevalence of schizophrenia in communities with strong social bonds (multigenerational families under one roof, closeness with neighbors, etc) versus modern urban communities (nuclear families or singlehood, relative disconnectedness with the physical world, lots more disembodied communication (Internet, phone, SMS) as opposed to traditional face-to-face communication)?

I was asking because I definitely notice that if I am in a situation where I am living alone, I am just flat out less happy than when my wife is home. And I'm one of those people who is actually even happier when both my wife and mother-in-law are home, basically, the more people living under my roof, the happier I am. I was wondering whether it was the same for others.

> This parasite messes with the brain, causing rats, for instance, to feel a fierce attraction to its predator, the cat.

This is wrong. The parasite causes rats to lose their natural fear of cat smells, not gain an attraction to cats. The author mis-read the research:

"Infected rodents show a reduction in their innate aversion to cat odors; while uninfected mice and rats will generally avoid areas marked with cat urine or with cat body odor, this avoidance is reduced or eliminated in infected animals. Moreover, some evidence suggests this loss of aversion may be specific to feline odors: when given a choice between two predator odors (cat or mink), infected rodents show a significantly stronger preference to cat odors than do uninfected controls."

An amazing story. And so sad. It could affect any of us. I don't know how to react other than to be thankful it has not struck anyone in my immediate family. Hopefully we will make progress in better understanding schizophrenia and work towards a better way to manage it.

Deranged nutter here, glad to see mental illness being discussed in a mature manner on any website.

Ignorance of mental illness is widespread, but what is becoming more popular is abusing the terms to describe your own problems. When the term "bipolar" starts to be used to describe someone's mere eccentricity, it indicates awareness of the condition has increased, but the level of ignorance has remained the same.

A good example of ignorance related to depression:

Many people who suffer from depression have problems with anger. From my experience, being diagnosed as a loony correlates with an increase in aggression from your fellow humans, mainly directed towards yourself. I don't see how subjecting someone with anger-management problems to more anger and aggressive behaviour is going to help them.

A diagnosis of mental illness is a diagnosis, not an answer to or explanation of a problem, which is usually how it is interpreted. Appropriate action will help the problem, as opposed to thinking that a prescription of escitalopram or fluoxetine will just make the problem fizzle away.

One more point: people with mental health problems dislike being talked to and treated like naughty children. That is the quickest way to lose their respect.

I found the 2010 Discover Magazine article, which I believe is referenced in the American Scholar article (the stuff about Torrey and HERV-W), fascinating. A couple of excerpts:

"One, published by Perron in 2008, found HERV-W in the blood of 49 percent of people with schizophrenia, compared with just 4 percent of healthy people."

..."In the past few years, geneticists have pieced together an account of how Perrons retrovirus entered our DNA. Sixty million years ago, a lemur like animalan early ancestor of humans and monkeyscontracted an infection. It may not have made the lemur ill, but the retrovirus spread into the animals testes (or perhaps its ovaries), and once there, it struck the jackpot: It slipped inside one of the rare germ line cells that produce sperm and eggs. When the lemur reproduced, that retrovirus rode into the next generation aboard the lucky sperm and then moved on from generation to generation, nestled in the DNA. Its a rare, random event, says Robert Belshaw, an evolutionary biologist at the University of Oxford in England. Over the last 100 million years, there have been only maybe 50 times when a retrovirus has gotten into our genome and proliferated."

And then our body's reaction to HERV-W seems to be involved in most schizophrenia. It's odd to think of modern problems like that being down to an ancestor getting a virus millions of generations ago.

>A model 1987 longitudinal study of 269 patients suffering from severe schizophrenia and released from the Vermont State Hospital between 1955 and 1960 found that one-half to two-thirds of them had significantly improved or fully recovered. These were the most hopeless back-ward cases. These patients had benefited from social workers and therapists and employment counselors, an extensive support system continuing over several years and consisting of one essentially unchanging professional team. By the time of the study, a significant number of them had reintegrated into the community.

I always suspected something like this. Looking at the obviously mentally ill homeless people, I always feel some guilt that we, as a society, are just leaving them behind by not mustering the necessary support and help.

" Two University of Texas computer scientists programmed a voice-recognizing computer with neural networks and taught this artificial brain simple stories. They then simulated a hyped-up dopaminergic system by reducing its ability to forget or ignore. This unfortunate computer became delusional. It made up wild, disconnected stories, even claiming credit for a terrorist bombing. If computers can go crazy, can they be cured? If computers can be cured, can we be cured?"

I would like to know how they drove the computer "crazy" (and how this maps to human neurology, if at all).

Epigenetics (the way genes switch on and off) is another area of intense interest for schizophrenia researchers. Every nonreproductive cell in our body contains our entire genome, and in every cell, some genes are properly switched on and others off. We inherit our genes, but environment strongly affects the switching mechanisms. This was dramatically demonstrated in a study of persons born during the Dutch Hunger Winter of 1944-1945, a famine the Nazis created in the Netherlands by cutting off food supplies in retaliation for Dutch participation in the resistance. Infants born during the famine to half-starved mothers, a cohort now turning 70, have higher rates of all kinds of pathology, including schizophrenia.

I am super tired today, fall out from my own struggles with getting well in the face of a genetic disorder and doctor pronounced sentence of death. I feel pretty apathetic. I don't know how much is chemical, how much is situational. I get so tired of being treated like a loon by the world.

The disease, malnutrition, epigenetics, inflammation -- all that stuff is stuff I have worked on to deal with my own issues. I flail about, unable to find a way to speak of it. I have mostly moved on to trying to figure out how to make money instead of how to help other people.

I don't quite know what to say. Inflammation is rooted in acidity and promotes infection. Infection promotes malnourishment. Malnourishment does all kinds of wonky things to the brain.

I am not having a great week, physically, and it has a track record of impacting my mental functioning and mood. My only known auditory hallucinations are related to an overdose of decongestants. I have no reason to believe schizophrenia runs in my family but the article hits a nerve in some important ways.

My condolences to everyone who had a family member or friend die of complications caused by schizophrenia or some other mental illness. I myself have schizoaffective disorder since 2001 which is like schizophrenia and bipolar mixed together and less than 1% of the population gets it and it is very rare and misunderstood.

I have lost friends I worked with and went to high school with to mental illnesses and they killed themselves. I've been suicidal myself in my life in the past. But I vowed I would not try suicide ever again and work to improve myself so that one day I can return to work and earning a living.

You have my deepest sympathies, empathy, compassion, and love. I really care about mentally ill people and their families and friends, even ones that passed on.

I don't understand why Microsoft took so long to do this. If they had been faster after Google Hangouts, it never would have gained a beachhead. Xbox One could easily find a secondary market as a conference room device; a Microsoft version of Airplay and Chromecast (ideally open to Mac and general web as well) would take more engineering, but would have been awesome too.

I hope open protocols catch up already. XMPP/Jingle has had user-side multiplexing for a while (MUJI), but it requires decent bandwidth. It's a deferred XEP for some reason, though. Also, most clients lack support for it. Server-side multiplexing wasn't standardized for some reason, so XMPP servers haven't come up with anything common so far.

Skype remains just another proprietary walled garden communication network which in addition can't be trusted. It should be avoided.

I'd written Skype off as malware after it kept putting a browser extension on my computer that parsed every single page I visited, without my permission. Even after removing the plugin, it would regularly reappear. This behavior is not okay.

I've used many video conferencing solutions but I have the impression that Skype is going to be the most convenient (largely because we're already using Skype as a team). Sqwiggle[0] is pretty neat as well (Edit: I had to mention that, AFAIK Sqwiggle doesn't support group video conferencing).

It used to be free a few years ago too, until it became 'premium' in early 2011. [1] The difference is that Skype now features more ads than in 2011, even if you have multiple monthly subscriptions (a no-ads version is now a considerable perk of the premium tier).

Funny - because just last week I set up new dummy Google+ accounts for various family members in Europe and India so we could use Hangouts without polluting our regular Gmail accounts with Google+ nonsense.

We won't need that anymore. We were perfectly happy with Skype, but missed group video.

I see another player that Skype is going after: WebEx. Most uses of WebEx are simple video chatting; most of the time no one needs the dozens of other features that WebEx offers in its bloated product. In true innovator's-dilemma fashion, I can see Skype now truly eating into WebEx's market from the bottom up.

Nice! Paying a premium for this was a bummer, but it does still work better than all of Skype's competitors, in my opinion. The primary problem of Hangouts and whatever else is out there is audio echo. I don't know if that problem is simply unsolvable in Flash or what, but Skype has handled it much better. All you need is one person on your Hangout call who doesn't mute their mic and the experience is a disaster, unfortunately.

Let's think about this logically. Google takes 32% of every AdSense click [1], so if an account's earnings of $5,000/month represent the publisher's 68% share, Google is making about $2,352/month from that account. So by banning the account, they make $5,000 one time and lose $2,352/month forever. No company is stupid enough to do that.
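As a sanity check, that revenue-share arithmetic can be reproduced in a few lines (assuming, as the $2,352 figure implies, that the $5,000/month is the publisher's 68% share of gross click revenue):

```python
# Sketch of the revenue-share arithmetic above. Assumes the hypothetical
# account's $5,000/month is the publisher's 68% share of gross click revenue.
GOOGLE_SHARE = 0.32                # Google's cut of each AdSense click
PUBLISHER_SHARE = 1 - GOOGLE_SHARE

publisher_monthly = 5000.00                        # what the account earns
gross_monthly = publisher_monthly / PUBLISHER_SHARE
google_monthly = gross_monthly * GOOGLE_SHARE      # ~$2,352.94/month

# Banning right before payout trades a recurring stream for a one-off gain:
one_time_gain = publisher_monthly                  # the withheld balance
months_to_break_even = one_time_gain / google_monthly  # ~2.1 months

print(f"Google's monthly cut: ${google_monthly:,.2f}")
```

On these assumptions, a ban only pays for itself if the account would have churned within roughly two months anyway, which is the commenter's point.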

However, considering a site making $5,000 or $10,000/month is generating quite a few clicks, I think it makes perfect sense for any account reaching these thresholds to be manually reviewed to ensure they are valid sites. The quality of Google's clicks is one of its main selling points, and by cutting out spammy sites at the source it both improves the quality of its own program and at the same time removes a lot of the financial incentive to run a scummy site.

So my guess is these policies (or similar policies that involve manual reviews of sites) make perfect sense, are not illegal in any way, and this whole posting is as bogus as it looks.

Everything about this post strikes me as a conspiracy-laden fake, from the typos to wrong terminology to untrue policies to the lack of specific names of people. I passed this pastebin to the ads side to confirm for sure, but I would treat this as completely untrue.

Added: Yup, I'm hearing back from multiple people on the ads side that this is pretty much untrue from start to finish.

Also notice that the "rmujica" account that submitted this item has never submitted any other story or written any other comment on Hacker News before today.

It reads like some disgruntled AdSense publisher's theory as to why they were banned. Now, it is true that in 2009, when the Great Recession hit, Google went through its processes and identified places where controls were lax. And it's true that there has always been a lot of abuse of AdSense (it is, after all, the first thing a neophyte ad-fraud wannabe does: create a page, put AdSense ads on it, and then pay a botnet to click on them. It was so common it almost seems like some sort of starter project or tutorial).

I would be surprised, though, if anyone actually set out to 'screw' the legitimate advertisers. It is, after all, Google's bread and butter.

Funny; back in 2010 this exact thing happened to a company I worked at. The day before payout (for the previous month) our AdSense account was banned. So we lost 2 months worth of ad revenue. They completely ignored all of our emails and we had to move to another ad provider immediately.

If you are going to sit around and "see what happens" for 3 years, you talk to a lawyer. You gather evidence. Emails, text chats, etc. You audio record meetings and conversations with people (subject to lawyer advice). You collect enough information over a long enough period of time so that an investigator can trivially search a dumped archive of email to verify your claims.

But we are supposed to believe someone who offers effectively no evidence from the duration of their claimed tenure, and who pushes it off as "I stayed because I had a family to support, and secondly I wanted to see how far they would go." and identity protection at the level of "such as waiting for the appropriate employee turn around"

So... no Hardy Boys level of investigation was performed, no evidence was gathered, no voices were recorded, no text messages were saved, no emails were forwarded, not a single byte was smuggled out on a flash drive nestled in the poster's pocket. Nothing was done to offer even the slightest bit of recording of anything.

The poster is either the most pathetic excuse for a whistleblower that I've ever heard, or it's a poor-quality April Fools' joke that is 28 days too late.

"The new policy was officially called AdSense Quality Control Color Codes (commonly called AQ3C byemployees)."

You know, you'd think that if Google had a 2-years-running official policy, some other bit of leakage about it would have occurred by now. Two years is a long time for an official policy on a giant company's largest product to have never even been whispered, by accident, on the Internet before.

As a large publisher, we have witnessed both the $10,000 and $5,000 thresholds. It's simply true. We are now using other networks and working directly with advertisers, and our AdSense revenue is $4,500 (total ad revenue is $25,000+/mo). We also quite presciently considered making a PR stink after the first AdSense ban (we were reinstated later), but decided this could tarnish the image of our company and of our product for our clients; AdSense revenue was not important enough.

Even though after the initial ban (when we overshot $10,000/mo) we were OK'ed by a contact on their Policy Team and reinstated (a contact we found after a lot of work), EVERY time we bounced back to $10,000, and then to $5,000 after scaling down, we would get new vague and inane threats from AdSense about our perfectly NORMAL UGC, as if the initial conversation with their Policy Team had never taken place.

We basically migrated away from AdSense, but if there are ANY SERIOUS LAWYERS here interested in a class action, we have a WEALTH of DETAILED documentation. IANAL, but it's definitely interesting: we have never encountered such SHITTY treatment from any other company, and we have about 1,500 corporate clients. Once again, we never did anything shady or different from other publishers that are apparently Green-listed by Google.

If this is such a slam dunk, why not just go directly to the FBI or IRS? I'm sure there are tons of people in those orgs who would LOVE to smash google if they really were behaving in such an illegal manner.

It seems a bit more realistic than hoping that they see this pastebin text, and decide to follow up on it, track you down, and get your statements on the record.

Making these kinds of statements without providing any kind of proof is just pointless. No one has any reason to believe any of this, and it therefore can't be considered a leak, because it carries absolutely no value.

I somehow doubt this is true, but Google has done its fair share to give rise to such rumours.

Mostly, they have been very opaque about the reasons for account bans, they haven't paid out the remaining balances of banned accounts (even when they presented no proof of any fraud), and they haven't provided a working way to appeal bans.

I can understand their decisions, but they do come with the risk of bad PR.

I found this part interesting. So the only way to even find out the reason for your account being banned is to hire a lawyer.

> A reason has to be internally attached to the account ban. The problem was that notifying the publisher for the reason is not a requirement, even if the publisher asks. The exception: The exact reason must be provided if a legal representative contacts Google on behalf of the account holder.

I did enough ad buying from AdWords and dealt with the changeover to "Quality Score" to know that Google cares a lot about raising their ad rates, but not having the perception of doing so.

The same goes with things relating to SEO and since AdSense sits at the intersection between SEO and AdWords, it is not at all surprising that some managers at Google would use the guise of "quality" to juice their numbers.

I don't know if it's true or not, but the story certainly lines up with my experiences with Google over the years unfortunately.

They deal with a lot of spammers and scammers, which is a legitimately difficult job, but they also are a giant megacorp out to make a buck, and that doesn't always mean they do the right thing.

Mostly I bet this gets swept under the rug and isn't investigated, which is a shame.

> Having signed many documents such as NDA's and non-competes, there are many repercussions for me, especially in the form of legal retribution from Google.

> No one on the outside knows it, if they did, the FBI and possibly IRS would immediately launch an investigation, because what they are doing is so inherently illegal and they are flying completely under the radar.

Wait, what legal repercussions? If what they're doing is illegal, (1) they probably wouldn't win the lawsuit against you, and (2) more importantly, NDAs don't apply (IANAL, but AFAIK contracts are overruled by law), and he could and should report the crime directly to the police.

Unless, of course, s/he has too much to lose from bullying, or if s/he fears Google bought politicians.

As a developer, I have had, and friends have had, their AdSense accounts banned right before payout for legitimate earnings. It hurts so bad to have that happen, and Google gives you little recourse. I cannot speak to the legitimacy of this pastebin, but reading it, it sounds completely plausible. If it walks like a duck...

My account was banned for "invalid activity" in the timeframe mentioned. The automated emails said they wouldn't even tell me what I supposedly did wrong. I tried appealing and only got an automated email telling me my appeal was denied. I was never able to talk to anyone or get any actual details on wrongdoing. A quick search and you'll quickly realize this happened to a lot of people.

I had something like $200 sitting in my account, which was obviously forfeited. Before this even happened, I removed ads from my blog (which is where the revenue was earned) because it wasn't performing well enough to justify having ads there anyway.

In the end, I didn't really care so much about my forfeited balance - hell, I even volunteered to forfeit it during the appeal if it was in any way associated with invalid activity among other things. The big issue is that this seems to be a lifetime/universal ban. BEFORE WE EVER RAN ADS, an AdSense account with an unrelated corporate tax ID was also banned for "Invalid Activity". The only reason I can conceivably come up with on the ban is that this was also associated with a Google Apps account that I have.

I'm a longtime Google shareholder and supporter, but it's times like this when you realize you can't trust "Don't Be Evil" any more. Ironically, I've spent way more in Google Apps + AdWords than I ever earned with AdSense.

It is a beautiful piece of rhetoric. Yet I wonder about its effectiveness inside Google in the 5 years since its inception.

Perhaps it's because 'don't be evil' translates to 'a middle ground, always.' It's almost like trying to keep your company at 0-loss/0-profit. That is not a great place to be, because you fall from grace easily.

This makes me think of the HN link yesterday about identifying businesses that are doomed to fail because the only way they can sustain themselves is moving into new markets. If Google keeps on banning publishers from hosting AdSense ads in order to keep the AdSense money, that's the same dangerous behavior that's indicative of an unsustainable business. The question becomes can they sustain the growth and movement? Are there enough new publishers for Google to sign up to replace the ones they arbitrarily ban or will they eventually start having a hard time finding publishers and web properties?

The whole click bombing argument is undoubtedly true, I've seen it happen too many times. Additionally, from my experience, Google will wait until immediately before the payout period to ban. I had $750 waiting to be paid out one month then poof! Three days before the payout my account was banned.

I don't know if this guy is telling the truth, but his arguments have truth in them.

Former AdSense PM here. It sounds implausible, but there were certainly lots of shady accounts from emerging markets. I could see them ratcheting up enforcement on questionable sites from those markets, surely, but not at the level of randomness cited here.

I had my AdSense account and site blocked just before I got to the $5,000/month mark.

I followed every recommendation in the official AdSense blog and ensured that every content policy was met. I appealed with a strong case and the appeal was denied.

No official explanation was given even by Google employees I contacted.

I see tons of sites with shit content and link building monetizing with AdSense, while ours is loved by 1,000,000 users who spend 12 minutes on average, with a bounce rate below 6%.

Everything in the leak makes total sense to me. We spent so many days implementing stuff to get Google's approval (image fingerprinting, a spam database, porn detection, overseas moderators), only to get a shitty robot response with no real explanation.

reocities.com got banned for no reason that I can think of. I never bothered to try to engage Google on this because it is as good as pointless. Google just simply doesn't care; as long as more people are signing up than are cancelling, they can keep the ship afloat. And on the off chance that you're trying to social-engineer them, they apparently can't take the risk of re-enabling an account.

So, I've taken the attitude that google adsense is to be avoided like the plague for anything approaching a business. It's found money if you can get it, and if you can't that shouldn't be a factor in your business plan at all.

Regardless of whether this is true or not, I'd say it's a good rule of thumb when dealing with a Monolith (Google) to expect better treatment when paying them money (AdWords) than when they pay you money (AdSense).

I have a bunch of projects I built up which I consider "done" as far as I'm concerned, but at the same time they're basically free to run (AppEngine in the free tier) so I'm always reluctant to sell... I do wish there were people who wanted to take one of these and build them out to the "next level".

- http://www.tweepsect.com/ - Gets 300,000~600,000 pageviews per month (about 1/6 uniques), makes a couple hundred bucks from ads each month. Been around for almost 5 years.

- http://colorblendy.com/ - There's also a Chrome app (20k weekly users), website gets about 4,000~8,000 visits/month. I had some ideas for making a "Pro" edition with things like importing/exporting colors from/to CSS stylesheets. Probably need to do some more market research before diving into this.

- http://wedomainsearch.com/ - Few hundred visits per month, brand new (built several months ago). Fairly cool idea that is really valuable for founders/hobbyists, I use it all the time. Needs some love for promotion and monetization, though.

I love working on these little stand-alone projects (a few more here: http://shazow.net/, also my email address if you want to reach out), sometimes I wish I could just whip them out and sell them as a living but the code alone is never as valuable without putting in effort into growing the audience.

http://brainy.io is a project I made that creates a complete API for you by inferring information from your Backbone code. It is a rapid prototyping tool.

The simple idea is that you create your front-end application without an API (using standard Backbone best practices). When you start your application, Brainy can inspect your routes and models and create an API for you.

Future potential includes something like websocket support (syncing data) and out-of-the-box server-side rendering. I have both of these working but not complete.

I'm not exactly looking to sell the project, but I'd love to appoint a new owner for the application. It has a lot of potential; I just can't prioritize it right now. My email is in my profile if anyone wants to talk about it.

Edit: I'm giving away something I put a lot of time into for free, and I'm getting downvoted. Why am I not totally surprised?

Menu and takeaway listings (mainly in the UK). Makes around $6,000 per year of Adsense. It's been fairly untouched since I started it in 2006. Has good pagerank for a lot of takeaway names. Costs are a Digital Ocean Droplet @ $5 a month = $60 per year.

It seems like a natural buyer would be someone like a student who wants to kick the tires of the whole running a small business thing, but start with a better baseline than merely "from scratch", mitigating some market risks, as it were, and reusing existing code.

What I like about this is that rather than all the research and development going to waste, it's helping someone else. So if you want to start a new thing, you can try to buy the closest thing that's already there, and stand on the shoulders of the previous generation.

There's an idea for a marketplace lurking in there somewhere: not so much from the "get the most cash for your side project" angle, but from the "find a good use for your old work" angle. I have a couple of projects I want to offload myself, and I care more about them being put to good use than about finding the highest bidder or whatever.

Serves 40 million images a month. It can generate small thumbnails or large screenshots of full-length Web pages. It runs itself and I rarely touch it, but I don't have time to improve and monetize it.

http://the.wubmachine.com, an automatic music remixing web + mobile app that I built a couple years ago. Still gets anywhere from 20-40k uniques/month. Has native iOS and Android clients, as well as the web app. Ads + in-app purchases generate a couple hundred bucks each month with nearly-zero maintenance, but I'm not sure I have the free time to keep it running indefinitely.

- Costs under $50 per month to run

- Generates anywhere from $200-$1200 per month, depending on the season (it gets big each December for some reason)

- 20,000-40,000 uniques each month

- native iOS and Android apps have ~10,000 installs each

If anyone happens to be interested, shoot me an email at hi@petersobot.com.

I was a developer/founder once. I partnered with another developer, and we made a tool that would track the ranking of your site in regards to certain search terms, to a.) figure out whether your SEO guy was worth the money, and b.) figure out which SEO tricks worked best.

We weren't really making any money, so I sold out my half of the company to a business guy. My former dev partner then went on with the business guy to make it a half-million-a-year business with about 4 hours a week of maintenance work.

Of course, I truly believe it wouldn't have gone anywhere if we didn't bring in a business dev guy, and I can't see any reason in retrospect why I should have expected their business to do so well once I left, but...

Press release platform for startups. Been around for a year, makes about $1k per month, all organically with zero maintenance. Still tons of potential but I am just way too busy with my main startup now, in a different space.

https://deckepic.com - Cards Against Humanity meets your social graph. With a few clicks, generate a custom tailored deck for you and your friends based on Facebook data. Download a completely free version or pay for a printed version of your deck.

A finalist at the PennApps hackathon in 2014. It gained a lot of attention on social media (YouTube/Twitter/the blogosphere) during the competition and has been gaining users ever since. It has nearly 300,000 unique visitors and an average of 2,500-5,000 page views daily. It was also featured in many online news websites, such as Lifehacker (http://lifehacker.com/webnes-plays-your-nintendo-games-in-a-...)

I wish I had time to build it, but I don't. It's a simple idea, where we had moderately good execution and big ideas for the future. Future plans were to expand to Doodle Packs, with themed artwork (also for sale, perhaps).

The image detection code (doodle placement) was pretty hacky but functional. The app has about 10,000 downloads, maybe 1,000 active (close to 0 marketing). I envisioned updating it with new doodles, better image code, sharing options, etc, and re-launching in the next couple months...but realistically that probably won't happen. Maybe you could make it happen? :)

$150k revenue in the last 12 months. Full project management platform to manage clients & contractors (developers, designers, etc.) with auto-quoting proposals, signing and paying online, and tracking and monitoring the progress of your site as it's being built. 80% done with the platform, and built in Rails.

Was doing this while at Stanford, but school is overwhelming and I can't keep working on it. Really promising opportunity to build the first scalable custom web service company.

Rights to existing clients, brand, full code repository with $100k worth of development hours, portfolio, multiple domains, trademark, and tons of infrastructure to execute rapidly on projects.

It's a 100% anonymous blogging platform with a clean interface. Just start writing and set a password for your post so you can come back later and edit it. Stores literally zero information about the author. Has a stolen-from-svbtle voting system and a reply mechanism.

I believe it has potential because I left it untouched and came back a month later to find dozens of employees of the company OpenEnglish using it as a forum to discuss their issues with their employer and to anonymously coordinate action. They have no fear of ever being "found out" because I couldn't even tell you who wrote what.

Built in a week and I'd give it away for free to anyone who wants to move forward with it. I think it's a really great idea but I'm not sure where to take it.

A mobile maze game inspired by Traffic Jam and/or Unblock me. Instead of moving items with your finger, you move a character (Igor Knots) and he must push items out of his way to clear the maze.

The game is on the Google Play Store, the Amazon App Store, and the Nook App Store. I also have a version that works on my iPhone / iPad, but never got around to uploading it to the Apple App Store. I also have the engine working in a browser.

The game got very little traction; but I never did anything to push it. The game comes with 50 levels, but I have an additional 50 designed.

I always wanted to expand the game to include more challenges and types of obstacles. But, I moved on to other things.

I have two alpha state multiplayer games for sale (c# unity3d), one is Desktop/Browser only, one also has an iOS client. Complete with Client/Server code, tons of assets (arts/sound/music), concepts, videos, Web backend for stats, ingame purchases of virtual currency, item shop, integration with several gaming platforms etc.

Both games were discontinued when my startup ran out of money in 2012, but the usage numbers were promising.

Chemical.io is a free cloud based lab management system that lets you catalogue chemicals using a smartphone and automatically re-order chemicals when running low.

It's an excellent domain and the software is very polished so it would be a great investment for prospective buyers.

I'm also open to offers on http://www.libramatic.com/. It's a cloud based library cataloging solution that lets you catalog books by scanning the ISBN using a smartphone's camera. It's currently in use in over 1,000 libraries and has a number of paying customers on monthly and yearly subscriptions.

I'm selling PaleoPax.com. I've moved to running MonthlyBoxer.com fulltime, and don't have the time or skills necessary to grow PaleoPax. Right now the Paleo Starter Kit sells for $49 including shipping, and averages $15 profit per sale. Without any marketing effort besides spending ten minutes to get an EveryMove account set up, I'm selling ~30 per month. Product contracts are all solid, and I'd be willing to continue doing fulfillment for PaleoPax for $2 per box. That'd mean a buyer would make $13 per sale and only have to deal with driving traffic.

The app has good reviews and thousands of submitted poems in its database, backed by Parse.

Monetization is through ads and the sale of credits to submit poems beyond the 5 credits you get with the app. I have done 0 marketing so have not been able to see how far monetization can be taken - right now it's quite low.

This project takes screenshots of various URLs once a day. It's been a fun way to watch my side project grow up, and a number of other people receive daily emails. It could be an interesting business, but not really one I'd like to pursue.

Simple, battle-tested assigned-seating chart app for companies or venues to sell tickets to events. At least $1m in sales has gone through it. I'll sell the code or the service for cheap; I don't have time to market it, so it's just sitting there.

The coolest feature is that seat updates are "live" (using polling) to everyone, so none of that "you have 15 minutes to complete your order" crap you normally get when buying tickets. It's now a cross-platform HTML5/JS frontend, originally I made it in Flash.
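The polling approach described above can be sketched in a few lines (the function names, interval, and seat-map shape here are all hypothetical, not the actual product's API):

```python
# Hypothetical sketch of client-side polling for live seat availability.
# fetch() stands in for an HTTP GET returning {seat_id: "free"|"held"|"sold"}.
import time

def poll_seats(fetch, on_change, interval_s=2.0, rounds=3):
    """Poll the seat map and call on_change(diff) whenever any seat flips state."""
    previous = {}
    for _ in range(rounds):
        current = fetch()
        # Keep only the seats whose state changed since the last poll.
        diff = {seat: state for seat, state in current.items()
                if previous.get(seat) != state}
        if diff:
            on_change(diff)   # e.g. re-render just the changed seats
        previous = current
        time.sleep(interval_s)
```

Because every client sees state changes within one polling interval, a seat someone else is buying simply shows as unavailable, rather than being locked behind a "complete your order in 15 minutes" timer.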

Below are two projects we have that could use a little more love. We may not be looking to sell them per se but would be willing to explore sharing in some of the equity/upside with the right person that wants to run with them.

www.scriblink.com

One of the largest online whiteboards. Built on old technology, needs a refresh but makes some respectable money every day from ads and monthly subscriptions.

www.groupie.co

Available on Web and iPhone app. A niche group messaging and social networking tool.

Greaterskies, http://greaterskies.com - sells posters of the sky (thousands of stars and all the planets) as seen from a given place at a given time. Selling PDF ($24) and printout. Seems to be getting popular as a wedding present, and also for birthdays. Common Lisp backend computing the posters; Python and JavaScript for the rest. Very nice conversion rate, but I haven't found the time to work on improving the number of visits.

The most popular listing of free bitcoin websites for new bitcoin users. It contains several affiliate links, and generates a bit of revenue each month. Not hard to maintain, but I'm occupied with other things, so wouldn't mind selling for a bit of cash to free up some personal time.

Private Group Classifieds. Facebook groups are increasingly being used as private craigslists, but they are not well suited to buying/selling. There is no search, there are no alerts, and they are very time-consuming to manage.

Built with Rails 4, Postgres, Resque. Got it to a functioning point and then had to switch gears to other clients/projects/work. The plan was to charge individuals for the tools (search, alerts, posting limits) rather than charge for a group, since they would just run to FB then.

Book In Bulgaria - http://www.bookinbulgaria.com - It's a full-blown reservation system, with a hotel reception endpoint (so that availability is always accurate). Hotels can sign up themselves, and it also has fully automatic billing.

We've received really positive feedback from both hotels and tourists, it just turns out that Bulgaria is a really (really) tiny market.

http://getforge.com/ - a superfast static web hosting service. Some clever tech, a bunch of users, very little support and brings in money. It's a nice project, but its success is more important than our owning it. Someone else will be able to do more with it than us :)

I've made a service for festivals/concert venues/labels, where they can promote their events in various streaming services (e.g. Spotify, Rdio, Deezer etc.). The difference between their regular web page and the streaming apps is that the festivals can now link their news and lineup to the actual artists playing. So, if Outkast is playing at a festival they can add a news item for it, and link to Outkast's artist page where the user can read Outkast's bio and listen to their whole back catalogue. Also, the whole line up for the festival is available on one page, so it's easy to listen to all artists playing, and sort by date, stage, etc.

The app is service agnostic, so it can be used with any streaming service. The only thing needed to add a new client is to create the frontend part, and link it to the back end APIs. Also, clients can add content from their own CMS and add it to the Promotor database through our APIs.

Currently there's only desktop support, but I'm working on adding mobile support for Android and iOS.

http://www.moredrunk.com - Lists alcohol from different bars and grocery stores in order of alcohol content per price. Haven't touched it in over a year and the parser needs an update. Not a lot of traffic currently, but perhaps it has high potential.

I have a side project that I'm done with: www.movietempest.com. It's been allowed to languish for a while, though, so it would need a bit of work to be functional again. (Also the tech isn't the most modern anymore: Python/Django on the back end, with MooTools and custom JS on the front end.)

Over 5k registered users, 3.8k monthly visits. I haven't had time for it in the past 2 years. I keep it around because it really helps people, and that gives me a warm feeling :-). It could surely use someone with a heart for marketing. There are at least a few monetization options (revenue could come from users, caretakers, medical facilities, drug companies).

I built it in 2011 with a friend going to law school, when the economy was particularly bad for law students. But we both have full-time jobs elsewhere and have no time for Firms.ly. We've had 3 paid customers.

It supports custom themes (there is just one for now), and more customization can be built on top of it with little code.

I wrote it for a customer who wanted a simple portfolio website (just the pictures and some pages for textual information), so I made this CMS-like thing to edit the page and it could be marketed for more customers etc.

But I don't have any hope it will get a customer, so its interface is still incomplete (and it is in Portuguese), but it works (with a custom domain). What would you do with it?

Proposal: I have the cash and reasonable business/tech experience, but not the bandwidth to run a "full-time side project" ( does that term make sense to anyone other than me? ). Would consider putting up capital to fund purchase and an initial operating budget if a willing and capable partner presents himself or herself to operate the site. I would take a "board of directors" - type role ( periodic advice and oversight, but not involved day to day ). You would be the CEO. If you'd be interested in something like this, shoot me a message and we'll talk.

I created a real-time communication platform for controlling home-automation and connected devices. Allows for control from anywhere with web access, very little set up on the "client" side. I tried to reverse engineer the way Nest Thermostat works (communication and syncing settings wise). I succeeded and it works.

Like Instagram, but for more privacy-conscious users. It's ad-free, so users pay a yearly fee to use the app, similar to App.net. Has support for comments, likes, push notifications and live filters.

We couldn't market it properly, and people were not as willing to pay as we had expected, so we want to sell it.

Includes the iPhone app, the fully responsive web app and the backend. Code can be reused to make any type of photo/selfie sharing app.

It's online home inventory software. It was featured on Lifehacker, among other sites related to home organization. My wife and I had a baby last year, so I never had time to market it properly. A lot of folks want an iPhone/Android app, but I'm not the right person for that.

Someone good at marketing can probably have this take off. No other competitors are as easy to use as iKeepm.

http://www.curate.im - content curation and discovery platform. It lets users create, share, and discover lists of links on whatever topic they want, to help people organise useful stuff and save them from having to trawl through search engine results for good links. Log in with the username 'test' and password 'test' to see the user dashboard, etc.

I have a site called TwitterAudit (twitteraudit.com) that started as a side project. We just started making a bit of money on it from paid re-audits. The site gets about 40k uniques/month. There are about 150k registered users and about 250k audits.

I own and run ObjectiveSee.com. It receives about 1,500 page views a month on average, but it's targeted and a good domain. I just don't have time to run it anymore; someone with the passion and time could run the site, or use the domain for their own project.

Real-time engine in C, python-based tools. Language and encoding identification and classification technology. Primary market is filtering for kids, currently the only category is pornography. Manual support for violence.

India-based government jobs and exams search/listing engine: http://www.findjobexam.com. I built it as a hobby project a few weeks ago, but it looks awesome and has the potential to become a kick-ass product in its niche, i.e. Indian students.

I've paused my work on http://stafet.com/ (a site where marketers/salespeople meet developers/side projects/startups) because I don't have enough time, but the people I've talked to about the concept were also really interested in a site that handles this kind of connection.

Note: the site is on a dev server on PagodaBox, which means it needs to boot up (takes 15-20 seconds), which explains the load time.

I'm considering selling it, as I don't have much time to focus on it. The domain is great and the site has a lot of potential once you manage to build a strong community - it shouldn't be too complicated, but it requires advertising effort.

A lot of people are complaining about not having the time/budget. Check out getstarted2014: you can win about 50k of development, marketing, and Rackspace hosting for your web/mobile app idea. - http://getstarted2014.co.uk/

It's a nonogram puzzle game and it made a fair amount of money on portal sites like Bigfishgames and Real Arcade. Since the market has changed a lot and it is not promoted any longer, it now only makes a couple of hundred dollars a month.

Initially we intended to release a sequel and even created great new levels. Since our other software business is quite time consuming, we haven't worked on the game since 2008.

http://stayontop.co/ - I built this desktop app for people who spend most of their time at a desktop. The idea is to let them stay connected with social, news, weather and email without unlocking their mobile phone every few minutes to check for updates.

A very basic android game (no ads or purchases inside it) with about 400 downloads per day, 15k current installs. I know there's not much monetization potential but it could drive steady traffic somewhere. You could totally redo it and get some automatic ranking in the store. I would use the money to pay for college which is coming up in about 18 months.

Got a project in beta (http://www.calltracking.at/ - currently German only; the English version will launch soon at http://www.calltracking.net) which gives you the chance to track phone calls in your web analytics, depending on the traffic source the visitor is coming from. I see a huge opportunity, especially in the B2B segment and in explanation-intensive products and services. Beta testers / investors / buyers welcome.

http://longr.co (a longer-tweet web app), brand new. It supports markdown; here's a post I wrote a couple of days back: http://longr.co/1h34Wk. I am primarily using it as my blog now. The sole problem here is that I love it so much that, instead of selling it (unless you are Gates, cough), I am more inclined to keep using it as my blog :)

Btw, should I open source it in the event I can't sell it? Would it be worth open sourcing?

I have a finished content sharing site which I had to shut down to focus on my other business. I still believe it's a great idea. Basically, people submit content (pictures, videos, music, etc.) related to six weekly topics. As you submit to the topics, you start filling in your wedges, game-style.

I made http://www.pastemehere.com (a screenshot sharing utility) but never got time to promote it and make money out of it; it's still running on my server. Though it's not much, I would still be happy to get anything for it. It doesn't store images on my server but on imgur, anonymously, saving you disk space on your server ;)

I would like to work with the owner of a side project to make it better or larger in terms of performance and feature set. If anyone is interested in collaborating on a pre-existing side project, let me know :)

Also, if you want to sell the assets of your side project (if it reached the start-up state), you can use http://shutdown.io/ (which was posted on HN a few weeks ago).

http://www.mylovecal.com - a site I made with a friend back in 2008. It requires absolutely no maintenance and earns a few hundred bucks a month. Tried selling it, but the best offer we got was somewhere around $10k, which is way below our expectations. Now actually working on renovating it, and probably adding some new content and features.

Website and Android application. It's the best way to listen to Indian music videos without getting lost in YouTube.

Launched about 1.5 years ago and has been growing steadily since. It's fully automated and new songs get automatically added so I don't have to spend a lot of time on it. I do keep updating the app every now and then.

A suite of .NET WinForms controls for Windows UI developers. I no longer have the time to keep developing it and the sales are not enough to support a developer full time. Maybe someone else can take it further or make use of the software.

Has paying users, and demand for features. But because of time constraints, I am not able to give enough time for this project. So anyone really interested in taking this further, feel free to drop me an email at: virendra.rajput567[at]gmail.com

Ghost (https://ghost.org/) will be the next big blogging platform, and GTheme.io is the first Ghost theme marketplace that focuses on Ghost only. The Ghost App marketplace is also ready and waiting for official Ghost Apps to become available.

I am actively working on a "Reddit"-like community. Open source, ASP.NET MVC (C#, SQL). Been working on it for a few weeks now. Would love to get some more devs to join and contribute. At first this was a hobby thing that started with Windows Forms, but after finding out about ASP.NET MVC and Entity Framework, it turned into an at-least-3-hours-a-day thing :)

http://www.hdrtool.com/ - you can view HDR images (High Dynamic Range: brightness is not clamped into 0-255 but can be any Float32 number) in HDR format and export them to some "clamped" format (PNG, JPG, WebP).
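The clamping that listing describes is easy to illustrate. Below is a minimal, hypothetical sketch in plain Python (not the site's actual code): HDR pixel values are arbitrary floats, and exporting to an 8-bit format means scaling by an exposure factor, capping into [0, 1], and quantizing to 0-255.

```python
def clamp_to_8bit(hdr_pixels, exposure=1.0):
    """Map unbounded float HDR values to the 0-255 range of LDR formats.

    HDR pixel values can be any non-negative float; "clamping" simply
    caps anything above 1.0 after an exposure scale is applied, so
    highlight detail above that level is lost in the export.
    """
    out = []
    for v in hdr_pixels:
        scaled = v * exposure
        clamped = min(max(scaled, 0.0), 1.0)   # cap into [0, 1]
        out.append(round(clamped * 255))       # quantize to 8 bits
    return out

# Values above 1.0 all collapse to 255 in the clamped output:
print(clamp_to_8bit([0.0, 0.5, 1.0, 4.2]))  # [0, 128, 255, 255]
```

Lowering the exposure before clamping is the usual way to recover some of that highlight range, which is exactly the kind of control an HDR viewer exposes.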

I'm selling RearView (www.rearviewapp.com), an Android app which is similar to the Frontback app. It's been online for about 5 months and there are 11,000 registered users and 2,800 uploaded photos. I haven't added any sort of monetization (no free/premium version and also no ads). The only cost it has is a fixed $20 per month of hosting on DigitalOcean.

http://www.honeydo.es - A social todo list. I use it every day personally, but simply don't have the bandwidth to get it to the point I believe it needs to be at while working full time and creating and maintaining my open source projects.

https://ceti.us - a crypto currency market analytics site I've been working on slowly for about 6 months. The analytics are inspired by physical modeling. I'm a full time aerospace engineering and astrophysics student that will be starting a PhD next year. The website could be monetized but I haven't the slightest clue about the best way to do that, nor do I have the time to figure it out. I literally only make improvements to the site out of necessity when my BTC growth plateaus...

The site was created with PHP, Moo.com API, the iTunes API, and the Google link shortener.

I needed this myself and never really marketed the site. I found it useful to create promo code give aways when attending trade shows for some of the other iOS apps I developed (the iJuror iPad app for attorneys is the example shown on the site).

* Established since 2007, lots of SEO potential and very poorly monetized.

It's not exactly a side project, it has been our main business for the last 5 years, but we've got bored. We still believe it has a huge potential though so we're looking for someone who can take it to the next level.

Not currently live... No revenue... launched it on April 1, 2010 when news about ChatRoulette was all over the place. It went pretty viral and enjoyed a nice long tail of traffic. Took it down recently to switch hosts, and never finished...

I also have the following domains: keyboard.cat, instakitty.com, instapuppy.com, kittyleaks.com, okaykitty.com, emojikit.com, bananaphone.me, srsbiz.co

Would like to sell www.recovermywebsite.com, a free service which recovers pages from Yahoo's/Bing's cache. On the 1st of May I am starting a startup and won't have time to continue the service. It should be possible to monetize it. It is built on ASP.NET MVC with an NHibernate backend.

This product shows you concise email exchanges the way they occurred across multiple email silos. This view is not sorted by date but arranged in a hierarchy based on how the exchanges occurred. We are actively developing and using this internally, but need a lot of help getting the first bunch of users and building a business around this.

I got a 1954 Chevy truck. Older restoration. Supposedly it has a Corvette engine (327?). You will need to tow it away. It's a first series. If you tow it away, I'll take 5 grand. You will not find a better truck to restore--at this price. And the best part about this project: there's no computer.

And yes--for the right price I will sell physibles.com. It does nothing now, but I think the name has potential. I want way more money than Thepiedpiper got, though.

The heart of the site is the Meme and GIF generators, best on the net if you ask me ;). Pro subscriptions are the biggest source of revenue. I'm not sure I would say "want to sell", but I spend most of my time on a much larger project/team, so I don't give it the attention it deserves. For that reason, I would be glad to see someone (or a team) who wanted to build it out much more.

It's kind of interesting to take an approach like this and contrast it with the user experience of something like Vim or Emacs. These editors are extremely popular despite not making it easy for first time users.

Is this because the target demographic of Vim/Emacs is all power users? It seems like making powerful tools like these is at odds with treating the user like they're drunk.

Not an eye-opener, but a very professional presentation. I've heard these ideas before and even talked about them myself, but I still enjoyed watching it, and I'll definitely use the metaphor afterwards.

I especially enjoyed the "drunk but not dumb" part. Minimal design and big buttons do not make a great UI. It's all about user intuition and guided experiences: recognizing what a user wants to do and helping them achieve it.

So in a moment of Sunday night serendipity, I liked the video enough to follow the link to Will's consultancy site and then his personal site[1], where I was intrigued enough with the interface to click through a few items. Seeing the name 'Bankai' took me back at least a couple of years when I spent plenty of time on the then new thesixtyone.com interface admiring their interface and gamification of new music discovery, where I had 'hearted' a few of Bankai's tracks. Will appears to be prolifically creative and dedicated to sharing, inspiring qualities.

Sensitivity to the user's emotions, and thus their capacity to feel insulted by such things as simplicity, explanation and repetition, is not an excuse for failing to address the need for simplicity, explanation and repetition. But advice to interface designers on the need to be relentlessly explicit and yet at the same time not insultingly condescending is both familiar and unhelpful.

His examples are clear elsewhere in this otherwise thought-provoking talk, but in the case of reconciling treating the user as 'being drunk' with treating them as 'not being dumb', I think he leaves the door open to interface designers still not subjecting their work to sufficiently rigorous scrutiny: what does a dumb drunk user do that one who was drunk but not dumb would not do? How drunk do you need to be to act dumb even if you are smart when you're sober?

Great video. I've seen more and more websites with the huge button that "does what it is supposed to do", but in many cases the companies are leaving out a description of what their product actually does.

If you wanted to spend $500,000 to encourage the emergence of bitcoin in a "market" like MIT, I think it would be more productive to use the USD like a traditional subsidy: Pay MIT merchants to offer discounts on BTC-based sales. If I'm spending $40/week at LaVerde's, a 10% discount might be enough to get me to convert a few hundred dollars of my own money into BTC.

As it is, the project will have to deal with getting merchants set up to accept BTC anyway, and my bet is that most students will be content to sit on their BTC hoping for another price spike rather than go through the trouble of learning how to use it for everyday transactions.

Perhaps I'm wrong, but wouldn't it make more sense to just use the open-source Bitcoin software to create their own altcoin... perhaps call them Mitcoins? They could give the initial, easily mined Mitcoins away to students, and spend the $500K on promoting it as a viable altcoin and getting the major exchanges to add it to their trading mix. With the credence lent to it by MIT, it would stand a far better chance of success than other altcoins, and the students who keep their initial coins would likely wind up with far more than $100.

This just kind of seems like they are squandering the $500K and completely ignoring their greatest asset: their association with MIT. That name carries a lot of weight with the type of people who buy these currencies.

Two MIT students have raised half a million dollars for a project to distribute $100 in bitcoin to every undergraduate student at MIT this fall.... The bulk of funding for the project is being provided by MIT alumni with significant additional support from within the Bitcoin community. The total of over $500,000 already pledged will cover the distribution of bitcoin to all 4,528 undergraduates

I guess I don't understand the "Why?". MIT kids are among the most elite already - why would people donate $100 to each student for that student to spend as he/she wishes? It's an honest question. As a company owner, why would you do this? What would be the benefits you would expect to see? I am clearly missing the "Why?" here. How would you feel if, after this first year, you found that 50% of the kids never did anything with the BTC? Or if 50% of the kids gave their BTC away to another student for nothing?

An interesting thing to see at the end will be how many students ever log in to set up an account, or even check their balance. Over $452,000 will go directly to the students, leaving less than $50,000 to cover the administration, the distribution, and the education (a big part). If they get the education part wrong, it will have wasted a huge chunk of the $452,000.
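The split behind those numbers checks out with quick arithmetic, using only the figures stated in the announcement:

```python
# Figures from the announcement: over $500K pledged, 4,528 undergrads, $100 each.
total_raised = 500_000
undergrads = 4_528
per_student = 100

distributed = undergrads * per_student   # goes directly to students
remainder = total_raised - distributed   # left for admin, distribution, education

print(distributed)  # 452800
print(remainder)    # 47200
```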

I once read some account, or theory, of Akkadian proto-currency. Market officials would issue clay coins stamped with a pictogram in exchange for goods like cattle or wine. These coins could be exchanged in the market and then cashed in for the goods at the exchange, at which point the coin would be smashed. I guess more durable goods like wine could be stored long-term in the exchange, and the coins could function as a medium of exchange.

You could do something similar with bitcoins that might be fun on a campus. A trusted bank-like-thing issues clay coins that can be smashed to retrieve the private keys. It might be nice to be able to go from digital back to clay from time to time. It would be interesting to see if they trade at current bitcoin prices, how many get smashed, etc. It also adds a layer of anonymity that can be seen and understood by non tech savvy folks.

I like the premise, but it's pretty evident that MIT will then have the highest concentration of naive Bitcoin users on the planet. Not sure what sort of folks might notice this and focus nefarious efforts there...

I've read the technical docs for Bitcoin, but there's something I still don't comprehend: with a distributed network like Bitcoin, how does the initial network get started? How do the first 2 or 4 or 12 nodes of a network find one another?

Both the announcement and the comments here focus on the amount in USD. When will (the majority of) that half-million USD be converted to BTC? (Via exchange? Via swap with an individual or entity holding a bunch of coins? Nice way to hedge liquidity if you've got the BTC.) What does it say that it isn't already done, and that it wasn't "each MIT student will receive 0.2 BTC" (or whatever a recent exchange would be for $100)?

There is probably more information available without having to contact the organizers, and the article was written for media press-release ingestion, but I'm surprised to see no comments on these facets here (at the time of writing; HN seems to be acting up a touch, so I've been trying to submit for a while after writing this).

When I first read the title I thought that MIT was going to set up a large mining project, partially as a research piece, and partially to counter balance the very large mining pools that are risking the 51% attack. Which I was very excited about.

> "Two MIT students have raised half a million dollars for a project to distribute $100 in bitcoin to every undergraduate student at MIT this fall."

> "The organizers admit they do not know how students will decide to use their bitcoin. However, they plan to use the time between now and when the bitcoin is distributed to build up the Bitcoin ecosystem at MIT."

I've been waiting to hear about the next phase of bitcoin development; beyond exchanges and marketplaces. What would be the easiest way to keep tabs on the bitcoin projects at MIT? Is there a publicly accessible message board for this project?

Edit:

Reorganized this post a bit. Didn't mean to side track so much from the content of the article.

Side note:

Please stop hijacking the native browser scrolling. I don't know if it's because I ate gul for lunch, but the custom scroll effect on this site makes me feel nauseous. Additionally, the site doesn't work at all without JavaScript. Why? The site could probably very easily be built to static HTML files for faster load times, decreased load on the server(s), and wouldn't require JavaScript to render views.

- 100% secure and hack-proof: impossible to lose bitcoins even if everything on your store is hacked. It uses Electrum's Master Public Key logic, which generates receive-only addresses without needing any private keys.

- 100% free (WordPress + WooCommerce + bitcoin plugin - all free)

With more people getting access to bitcoins - it's the right time to offer it as a payment method as well.
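For readers unfamiliar with the master-public-key setup, here is a conceptual sketch of the trust model. This is a toy analogy using hashing, not the real elliptic-curve BIP32/Electrum derivation, and the key value is a made-up placeholder; the point is only that the web store holds nothing that can spend funds.

```python
import hashlib

# Conceptual sketch only: real Electrum-style wallets derive child public
# keys with elliptic-curve math, not hashes. The illustrated trust model is
# the same, though: the store holds ONLY the master *public* key, so a
# compromise of the store can never leak the keys that spend the funds.

MASTER_PUBLIC_KEY = "xpub_example_only"  # hypothetical placeholder value

def receive_address(order_index):
    """Derive a deterministic, receive-only identifier for one order."""
    material = f"{MASTER_PUBLIC_KEY}/{order_index}".encode()
    return hashlib.sha256(material).hexdigest()[:34]

# Each order gets its own fresh address; no private key ever touches
# the server, which is what makes the setup "receive-only".
addr0 = receive_address(0)
addr1 = receive_address(1)
assert addr0 != addr1
```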

As someone who is a member of both of this site's target audiences, I'd be curious to actually see some live samples of products that were developed on a $3,500 budget. I think that would help give "idea people" a realistic expectation of what the scope of their project should be in its first iteration, and it would set some expectation for the developers as to what they should be able to produce on a $3,500 budget.

Another thing I'd like to see Betatype providing is some sort of franchised coaching and guidance to both the product owner and the developer on how to best set themselves and the other party up for success.

Neat idea. Initial thought is that I have trouble seeing this be profitable, given how little Betatype is taking. Betatype itself is only getting $140 / project and for this is promising to "help clients flesh out their idea into a clear list of requirements." That alone could take many hours of back and forth.

I look at it like this: There are 4 directions that a software project can be constrained: Time, price, scope, and quality.

1. Elance or ODesk can offer you fixed time, price, and scope, but highly erratic quality.

2. An expensive contracting agency can offer you fixed time, scope, and quality, but with high costs.

3. This site promises fixed cost, and quality, but makes no promises on scope or completion timeline.

All that's left is for someone to make a site that has fixed timing, like "Software Prototypes in One Week". I'll wait for that one to hit the front page of HN. As a developer I'd actually prefer that because I know when it's going to be over.

It's nice that the dollar figures are all upfront on this page, but it seems a bit strange to say "Get a working, launchable prototype for $3,500" when the client actually pays $3,570. It seems that you're putting half of your fee on the client side and the other half on the programmer side, but in doing so the client sees a mixed message that could be off-putting.

I just can't imagine any non brochure-ware prototype that can be built for $3500 (let's leave off the problems of the middleman not making much money on this one /and/ promising to do project management).

Two problems:

1) Limited hours: you're talking about somewhere around 20-30 hours of senior dev time, or 40-70 hours of a newer dev's time. Let's say they can both accomplish the same amount in that timeframe. It's still /not that much/, especially if you are taking any design into consideration beyond popping Bootstrap on it.

2) Zero iterations. (And this is perhaps the bigger problem, since the first can be solved simply by changing it to a prototype for $10k or whatever.) I've never seen even an MVP come out fully baked without a lot of iterations at the beginning. Arguably, prototyping is the process of rapid iteration. One-and-done just doesn't work for what a prototype needs.

Sometimes a blank canvas is not a good thing. Twitter invented something entirely defined by its constraints.

The process of negotiating, defining and developing a "prototype" between a client and a freelance developer is hard for both sides. Often, neither is skilled at managing the process.

Once you limit some aspect of the project, you can adjust the other aspects around the limitation and everything becomes simpler. It's a lot easier to scope a project when the scope needs to fit in a $3500 budget. Simplifying negotiations and decision making could be a big win. Choosing a developer by looking at a portfolio is a lot easier when the scope budgets are the same.

I wonder if this idea could usefully be applied to something outside of software. How about a $3,500 (or whatever) custom kitchen?

Sort of off-topic, but has any non-technical person launched a startup/business without a technical cofounder?

The company I work for gets people asking for quotes to build their "next big idea", where the person isn't technical but has industry experience behind them. This has turned out pretty well for some people.

On the bright side, this project acknowledges the killer issue with these kinds of marketplaces: scalably managing quality, scope, and disputes.

But on the other hand, I see nothing but assurances and a woefully low cut that I can't imagine could keep this train chugging without shoveling time (=money) into the furnace. At ~$140 revenue a pop, if even 4% of projects go off the rails, the "insurance" policy already puts the company in the red -- forget profit.

There is a huge problem to solve here, for sure. I want someone to solve it. But I'm not seeing a solution.

This is a cool idea. I've been experimenting with a similar model for design. Undesigned[1] will redesign or prototype a single page in one week for $1500.

In your case, the greatest struggle is going to be supplying the marketplace. It makes sense to offer the service at unsustainable margins for now while you seed the market, so I wouldn't worry too much about all the comments talking about how unsustainable it is, and instead focus on the ones talking about the difficulty of building a marketplace.

I see a lot of comments here about the quality of work that can be delivered on a $3500 budget. While this may be considered a shoestring budget in developed countries, it is completely reasonable in developing countries such as South Africa. Here, $3500 equates to roughly R37'000, which would be considered a decent monthly salary for a mid-level to senior software developer (depending on location, of course). So from this perspective, a South African developer would easily be able to spend a month developing such a prototype, which is almost enough time to develop a functional system for a simple-enough use-case. I've personally developed a few pieces of software around this price-point: software that is actively in use now and not just a prototype, and whose code I don't consider completely terrible either.

I like the idea of it; I was thinking of something similar myself not too long ago.

It's great because it provides big value, and your job (i.e. as the Betatype founder, a contractor specialising in prototypes, etc.) is to highlight that value, because it's not obvious to many. Doing that seems like a fun experience.

Tell the "idea people" that for $3,500 you will sell them some seemingly incomplete software which might need to be thrown away, and they will shrug their shoulders or laugh.

Tell the same people that for a small price ($3,500) you will help them find out the potential of their idea, and, in an ideal world, they should love it. Many of us have worked on some stealth project where the founders spent loads of cash just to find out that no one needed it. All of that could have been avoided for $3,500.

> As part of our screening, we'll help clients flesh out their idea into a clear list of requirements.

At market rate for an experienced contract product manager or developer with product management chops, the process of defining functional requirements for a new product will usually entail more than $3,500 worth of work.

Really cool idea, but I think a flat cost of $3,500 for a prototype is way too low, and the approach is error-prone. What about iteration? Iteration is one of the key factors in the prototyping process. Does that $3,500 figure cover only one iteration, or the initial development and subsequent iterations? Or does a company/individual have to pay another $3,500 for each revision?

To all the people saying this price is too low: don't you think they are doing this to attract a bunch of people and solve the chicken-and-egg network-effect problem first? Then I'm sure they will raise their cut.

This is one of the most interesting things I've seen as a Show HN in ages... Awesome! I can't wait to see how this plays out, and how scared people are to tell each other about what they're building with it while it's being built (e.g. are listings for ideas public to developers?)

$3500 is one week without talking too much to the client; not sure how you get something usable from that. If your idea is clear then sure, but (hopefully) most aren't, and then you need a few days talking/sketching and starting up your favorite edi... oops, $3500 gone.

4% doesn't seem sustainable if they're taking credit card payments. The best they can probably get to is around 1.2% transaction fee (if they've shopped around) and this is unlikely given the risk profile of a marketplace like this. I bet they are paying 2.5%+. So 1.5% ($52.50) profit per project?
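The margin worry above is easy to sanity-check. A quick worked example, assuming the commenter's guessed 2.5% card-processing cost (the 4% platform fee is from the site; the 2.5% is the commenter's assumption, not a published number):

```python
# Platform takes 4% of a $3,500 project; subtract an assumed 2.5%
# card-processing fee to see what's left per project.
project = 3500.00
platform_fee = project * 0.04   # $140.00 gross platform revenue
card_fee = project * 0.025      # $87.50 assumed processing cost
net = platform_fee - card_fee   # $52.50, i.e. 1.5% of the project
print(net)
```

That $52.50 per project has to cover support, dispute handling, and fraud losses, which is why several commenters doubt 4% is sustainable.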

I have to wonder how they came up with 4% (2% per side). Surely 10-20% is still a reasonable fee to connect these parties and manage the relationship.

Heck, both I and the purchaser pay 10% to an auctioneer to sell my stuff in < 60 seconds (yes, I know they do a little more than that, but mostly it's about supplying the marketplace).

I like the idea; at least Betatype will have less noise and better-quality developers/hackers. I wonder how the quality will be maintained, though. Also, there's a thin line around what counts as a beta product: will you prescreen the projects posted? There's also a good chance that buyers will misuse the service due to its flat-cost nature. Expect people asking for N features, changing deliverables, and unsatisfied buyers.

Interesting idea, but so far a poor start. We need more information and fewer errors. I would like a video on the main page explaining what you are doing here and why anyone should use you. Outline the strengths of your idea and give much more detail, please. The next few hours will probably be a turning point for whether this idea works for you or not. So be quick and fix all the issues!

Domain was registered today and we can assume the idea was hatched today if not fairly recently.

There is nothing here other than a single page and a way to signup for either a programmer or person with an idea.

This isn't a business; it's an idea. Wrapping it in a somewhat acceptably designed website and some marketing speak, and seeing HN start to nibble as if it's some real thing actually happening, is always interesting.

When is the line crossed between "Show HN" and free advertising for your idea?

No SSL? Not even for sign up and sign in? How is my info (user, password, e-mail, name, banking info) going to be "protected"?

Have people given up on SSL (and any security, for that matter) already because of Heartbleed? I can just hear some people saying: "We don't need SSL! Not having it is pretty much the same, but with less overhead!"

Seriously, without SSL, you are Beta at best. And if you have been operating like this for a while, well that's terrible... Your users should be very concerned.

Love it: taking something that's seemingly a complex process and turning it into a product. The only thing is, they should be charging a lot more than 4%. Perhaps $500 or a similar fee on top. If you provide value, charge for that value.

Assuming this is 2-3 weeks of work, I hope there are a lot of strict controls in place concerning requirements to stave off the "wait, this should be simple" chronic under-estimators, but I've been in plenty of spots where quickly getting a project like this would have meant a lot. Hope it works out for this site.

The term "prototype" seems ambiguous enough that it will cause problems. How far along does it have to be? How many bugs can it have? Cross-browser compatible? The problem with fixed pricing is that it's not scalable.

In the future, you should stop using freelancer sites, because the customers on them are substantially worse than you'd get otherwise. You will also want to note that not getting paid on time is actually quite common in freelancing. This is one of the reasons why, in the future, you will charge a lot more than you currently do, because you need to essentially self-insure against nonpayment in a way that W-2 employees mostly do not.

More immediately useful to you: you currently have a receivable against freelancer. That receivable has value and can be sold or borrowed against. The terms you get for it would typically not be that great, because they have to factor in both risk of non-collection and the costs of doing business in comparatively small dollar amounts. Still, that's likely the easiest option to make cash appear on Monday, unless you have consumer credit you didn't mention. (Get it if you don't, after passing the immediate issue. Cash flow issues happen frequently and consumer credit is often the cheapest remedy for solo freelancers.)

I feel for the guy, and freelancer.com seems to have terrible, appalling customer service. But this brings up another point: freelancing often isn't a good way to get money quickly. You can't live paycheck to paycheck freelancing.

Your clients will pay you late. They'll pay you the wrong amount. Your freelancing agency will delay the payment. You'll get a check, but it'll bounce. It would not be unusual when freelancing to start receiving your money for a gig two months after you started it (30 days for your client to pay your agency, 30 days for your agency to pay you).

Freelancing and needing money fast are, in my experience, mutually exclusive. One of the (many) reasons a freelancer charges a comparatively higher day rate than his or her salaried counterpart is there's risk involved with freelancing. If a company fails to pay their full-time, salaried employees at the end of the month it's a big deal. If a company fails to pay their freelancers on time...that's not out of the ordinary.

Freelancer/oDesk are terrible middlemen that don't achieve what they promise to. Businesses tend to go there looking for cheap labor rather than value. It's the white collar version of a boss driving up to the Home Depot parking lot, except with more bureaucracy.

Another thing that freelancers should learn is that the business is high risk, and if you're one late payment from doom, then you have to go get a job, go on welfare, sell plasma, sell hair, or whatever until you can afford the business risk again. It isn't a safe or easy thing and the people who tout the 'gig economy' as if it is should be fed to zoo animals because it leads people like this down ye olde primrose path to penury.

If your wife has some kind of weird medical issue and you're being evicted, you are loaded up with too much risk to freelance. Stupid things like this happen all the time, and it's not unique to this website; it happens when you sell directly to clients too. You either sue to collect, demand contracts for amounts large enough to be worth suing over, or eat the loss.

There is this concept of balancing risk and reward that is painful to learn because risk isn't easily perceptible or measurable ahead of time.

Also we should beat people over the head with Poor Richard's Almanack repeatedly. Expecting people to read it is probably a little much, but some lessons may be transmitted through bludgeoning according to the experts.

1. Any time you're in business for yourself and you don't have a buffer, you're basically out of business. (I keep re-learning this one.)

2. Using a site like this turns you into a commodity. It's structured to be a race to the bottom. There is no substitute for finding your own clients and building relationships with them. That's hard to do and takes time and energy. See #1.

- Obviously the Freelancer.com customer support team is giving him the runaround.

- It's hard to see anybody in trouble and not feel bad for them.

I do wonder though, why doesn't he plan better? Having found himself in a tough situation a couple of years ago:

- Why didn't he build some savings before becoming a freelancer? Doesn't he understand that freelancing without a buffer is a really bad idea?

- Why didn't he test the waters of Freelancer.com with a much smaller project? Any time you have some middleman between you and your money it seems like you want to know that they are not going to jerk you around.

At this point, it seems like he should go to one of those "work for cash" places and do some quick manual labor to get at least some of his bills paid off. And then learn how to avoid this sort of situation in the future...

This happens all the time, unfortunately. Google Groups bans me because I open too many tabs at once and they think I'm a robot; there's no one to talk to about this. eLance refuses to close a job where a contractor has done no work at all, but keeps billing me. I call their support numbers and just get told the department isn't open, no matter when I call. I used AirBnb for years; then one host refused to let me come to the address, said they would pick me up, never picked me up, and happily charged me. I finally got her to agree to cancel the stay, and she told AirBnb to refund me, but they never did.

All these companies just have atrocious customer support because they don't want to pay for customer support. This guy's problems are probably due to support being outsourced to some foreign country not familiar with the fact that most people in the US have no passport or national ID other than a driver's license.

This was reposted on /r/programming so I have copied here my reply to that thread:

---

Hello everyone.

This is Matt Barrie, Chief Executive of Freelancer.com.

I see this has been reposted again in /r/programming. I think that we are being treated a little harshly here when the facts behind this case are not fully known, nor are we able to provide them to you due to privacy issues.

Dustin did successfully withdraw funds the first time only a few days before. This has been omitted from his article.

On Thursday 24/4 he queued a substantially larger withdrawal for processing which was queued according to our normal timetable for Monday 28/4. This larger withdrawal triggered our KYC process (Know Your Customer) which required provision of identification. Documents were provided on this day but did not pass our process. I personally looked into this and it was for good reason that the anti-fraud team rejected them.

On Sunday 27/4, further documents were provided (still not completing the KYC process), but at the same time Dustin posted his article on Medium.com which blew up both here on Reddit, and HN.

By this stage it was early Monday morning here in Australia, and this issue was brought to our attention (thank you to those of you that emailed me). I personally looked at the accounts and was present when the team called (at about 7pm NY time) both Dustin (who didn't answer) and the employer to resolve the matter. The employer told our team "he does not wish to cooperate or assist us in any request" along with several expletives. We informed the employer on that call that in that case we were likely to refund the payments back to his credit card and that he would have to pay direct.

Our team then investigated the project itself, and became concerned enough with the nature of the project (which was against our Terms of Service) to cancel it, refund all the fees (for both Dustin and the employer), and refund all payments back to the employer's credit card.

I hope that you all understand that we have robust anti-fraud procedures in place designed to protect you in the event that your credit card is stolen. We don't subvert those processes because a post makes its way to Reddit or HN.

Over 5.8 million projects have been posted on Freelancer.com. We are a public company listed on the Australian Securities Exchange. We are not in the business of "actively delaying payments to contractors". This is a ridiculous assertion from OP.

NO payment was delayed in the above process with Dustin. We process withdrawals for Monday EDT. We were attempting to resolve this issue Sunday evening EDT.

Both Dustin and his employer were, to say the least, unhelpful in getting this resolved.

I am the Chief Executive Officer of Freelancer.com and I have personally investigated this situation.

While I sympathise with Dustin's situation, he has failed to complete our Know Your Customer (KYC) process, which involves the provision of bona fide photo identification. We take the security of our marketplace and the protection of our users very seriously and have robust checks and balances in our anti-fraud procedures.

I have looked at all the details for this case, and our support team have done exactly the right thing in this instance.

We have decided to refund all funds associated with the project back to the employer's credit card (who is also located nearby in New York) as well as all fees associated with this. The employer has been called and informed that he will need to pay Dustin directly.

We will also be investigating the nature of this project further.

Thank you also to those of you that took the time to email me to bring this personally to my attention. My email address is matt@freelancer.com and I am happy to receive emails about any issue, even if it is to just drop me a note saying hello.

What I wouldn't give to have a freelance site that paid instantly. oDesk takes 5 business days to release funds (plus another day for the bank deposit to go through), so I often find myself paying rounds of bills, all with late fees (my gym's is $20!). Due to various financial struggles in the 2000s, I have not used a credit card since early 2008. So even though the amounts of money I'm dealing with are small, not having them can be expensive.

This country comes down hard on people who choose self-employment. But there is an honesty to humble living that I'm not ready to give up. I remember back to the vacuous state I was in while working for other people, fighting every fiber of my being to simply get up in the morning and face people I vehemently disagreed with. It's like racing with the pedal to the metal while the transmission is stuck in first gear. To me, underemployment is a wedge between the life I have and the one I wish I could lead. Even though it's hard at times, liberating myself from other people's expectations was one of the best decisions I ever made.

So why backpack around Europe when you can experience everything life has to offer right here in the US? Become a freelancer today!

Update #2: The CEO has called me and notified me that the first wire was held up at an intermediary bank in California, and has been resent. They have also refunded the $1,000.00 project fee back to the client so that he can pay that to me as well.

After numerous emails and support tickets and live chat conversations that didn't provide any answers, Matt Barrie finally gave me an explanation at 10:30 pm when he called me.

The explanation given for why my driver's license was rejected was that their manual states potential counterfeit IDs have a brown background in the photo section of the driver's license instead of a white background. When I renewed my license a few months ago, the DMV gave me the option of using my recent license photo or taking a new one. I opted to use the old photo. It's a black-and-white photo as far as I can tell, but according to Matt Barrie the background of my driver's license photo appeared to be "brown." I think it's a rather ridiculous reason for rejecting my ID; it clearly does not look like a brown-background photo and is in fact my valid New York ID issued by the state. Regardless, they should have told me this immediately instead of giving me the runaround for days and holding up my payment. They also should have done this verification process before I began working on the project and before I waited two weeks for the first wire. They also never requested that I do a keycode verification, where I hold up a sign with the unique keycode they give me along with my ID. I am displeased with their handling of this, but thankful that my client is dependable and professional and has opted to continue this project off of freelancer.com.

I regret that these are the actions I had to take in order to get freelancer.com to clarify why they were holding my payment and rejecting my ID. I am lucky enough to be working with an understanding and motivated client who has gone out of his way to help clear up this situation. He was gracious enough to advance me a payment this evening even though he is still awaiting his returned funds from freelancer.com.

In the future, I hope that freelancer.com will be more direct with their customers when issues like this arise. I also hope that everyone who is having a similar issue gets it resolved ASAP without it having to come to this ugliness.

My wife and I cannot thank all of you enough for your support. When companies won't do the right thing, it's good to know that the power of social media can help keep them accountable.

It sucks that this guy has to go through all this. I used Freelancer for a few years, but it consistently got worse and worse, and eventually I had to move away from it. Customer service was one of the big problems, and from reading the responses in that post it's gotten even worse. I'd highly recommend not using them. I use Elance, and although I've had minor issues with them, they've always been resolved. They still aren't great, but they are 100x better than freelancer.com.

That's why you shouldn't do any larger jobs over these sites. Do smaller-scale projects through them to build your profile and reviews, and route each larger project directly through a wire or another payment processor, without the freelance site as a middleman.

Just a tip, but maybe you should try to put public pressure on them. I had a case once where I got scammed for $1200 on Elance by a guy doing chargeback fraud, and Elance locked my account because I didn't have enough funds loaded to cover the scam cost (they wanted to deduct $1200 from my balance to cover the chargeback, with no right to appeal). I was locked out for months, until I found a relatively popular blog post about Elance (on HN too, coincidentally) and an Elance representative who had commented on it. I replied to his comment, asked him why they did what they did to my account, and got the account unlocked in less than a day. He probably figured $1200 is worth less than the negative publicity.

What really sucks is that so many services nowadays use similar tactics to postpone payments, effectively keeping millions of dollars that don't belong to them. And they know that no one will sue them over a few thousand, so they get away with this bullying...

As a side note, and out of curiosity, is there a mechanism on HN I've missed? Soon after I submitted this link, it was around #3 on the front page, but when I refreshed 2 minutes later, it'd dropped to #25 and didn't rise again even though it was still quickly gaining points.

As an example, right now, this post has gained 199 points in 5 hours and is at #49, but the submission directly above it at #48 was also submitted 5 hours ago but has only gained 6 points. What am I missing?

Using Freelancer.com for the first time is a pain; I faced something similar when setting up documents, withdrawal limits and so on. My suggestions, not related to your main problem, are:

1) You mentioned you were charged $1,000 as a service fee when you accepted the project. I assume your project was for a total of $10k and you paid $1,000, which means you're on the basic plan where they charge 10%. Suggestion: before accepting any big project, make sure you at least sign up for the $25 membership plan; that way they only charge you 3%, so instead of paying $1,000, you pay $300.

2) The customer service is horrible and scheduled payments may fail at any time. Make sure you have at least two methods to receive the money; my suggestions are a Moneybookers.com account and a Payoneer card. Both are fast and safe.
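Suggestion 1 is easy to verify numerically, under the fee tiers that commenter describes (the 10% basic rate, 3% member rate, and $25 membership price are their figures, not official ones):

```python
# Compare the described basic plan (10% fee) against the paid
# membership (3% fee + $25 membership) on a $10,000 project.
project = 10000
basic_fee = project * 0.10          # $1,000 on the basic plan
member_fee = project * 0.03 + 25    # $300 fee plus $25 membership
savings = basic_fee - member_fee    # $675 kept by the freelancer
print(savings)
```

So even counting the membership charge, the paid plan comes out well ahead on any project over roughly $360 ($25 / 0.07) at those rates.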

I know you are facing a very complicated situation; just keep moving forward and don't let this get you down. Talk to your customer; he may be able to help you with a direct deposit from his side. I asked for that once a while ago and it worked. You can also agree with him to transfer funds from one Freelancer account to another. Some fees apply, but it's better than nothing.

These guys are the biggest scammers ever. I completed a project for $6000+ on Freelancer, and after being paid by the employer, with the project completely finished, THEY WON'T LET ME WITHDRAW THE FUNDS!!!

They now claim that the employer on that project is under investigation, but when I contact them, they know nothing about this.

Note that at this point, the employer has no reason whatsoever to spend any more time fixing this matter, as his work has been delivered and he has already made payment to Freelancer, released the milestone, and finished the project.

But now I have to tell my employer to ask Freelancer what's up?

So Freelancer told them they didn't trust that the account was China-based, since logins were made from other locations. So my employer explained why they needed a VPN in China. DUH.

They then proceeded to request pictures of documents, a copy of his ID, AND a photo of him holding a printed-out code which they provided!! So he complied, but then they also needed a copy of his ENTRY STAMPS ON HIS PASSPORT, showing that he entered China...

So my employer complied again.

(Note that I knew all this from communicating with my employer at this point, as Freelancer was ignoring me completely.)

But now Freelancer needed more documents, in English, which my employer can't seem to get hold of in China (if I go to my Dutch bank and request Chinese docs they won't comply either, duh).

Anyway, so now we are stuck, being ignored, with 600+ dollars in my account which they won't let me withdraw.

Notice the whole trick here:

If you DO manage to get money from your employer into your account, they will simply refuse it when you try to withdraw. At which point you have to go harass your EX-employer, who has no time for taking selfies for days on end, scanning docs, rescanning docs, over and over.

So their whole plan is simply for the employer to get sick of it and stop trying. THEY HAVE NO REASON TO KEEP AT IT OTHERWISE: THEY PAID AND GOT THE RESULT, DONE. And Freelancer goes off with the money.

> The builder ended up stealing our down payment and evicted us from the house.

I'm now in my 30s and many of my family and friends own their own property or own rental property. Having been personally involved in and witness to many, many real estate deals, including ones that have gone completely south and/or had extremely tenuous goings, I am wondering how exactly this is possible.

The entire system is regulated to prevent this sort of thing from happening unless you just fork over a bunch of cash directly to a seller, in which case that's your problem.

The older I get, the more skeptical I am of these cases in which the protagonist finds himself in financial trouble over and over again. Being a victim is incredibly addicting, because nothing is ever your fault.

Sorry if I come off as an asshole, but those are my thoughts on this matter. He posted it publicly on a blog, it ended up on HN, and I'm offering feedback.

Either this guy is the unluckiest person in the world, or he is extremely poor at money/project management. I can't find much sympathy when you promise your lenders payment based on a possible payment from Freelancer (you didn't even know it takes them a few days to process a wire, which you would have known after a quick Google search). You've shown you are not doing your research properly, and that's how you keep ending up in situations like this. Previous lessons didn't teach you much, imo; you are failing to research the basics and putting all your eggs in one basket, affecting both you and your wife.

If you are good at what you do, contact the guy you work for and ask him to send the money directly. If you are really valuable to him, he will.

If you have time to write this piece, you could just as well use your skill on some content-writing jobs that would pay you fair money while waiting for other payments/jobs.

I am not trying to defend Freelancer, as they did a bad job here, but man, protect yourself; learn how to avoid the sucky situations you constantly find yourself in...

I'm not much clued up on the legalities of these kinds of situations, but is there any law that allows him to threaten legal action? Surely if they're holding money owed to him for lack of ID, and they're violating their own policies by not accepting it, then something legal applies to the situation?

I prefer using oDesk over all other freelancing sites, as it is very straightforward. Plus, clients there prefer quality and are ready to pay high prices rather than choosing the cheap bids. I remember a customer paying me $400 for a 10-line Python script because he wanted it done quickly and without problems.

Wish we could do something to help you, like maybe bombarding Freelancer's Facebook page to get your issue resolved.

I have had terrible luck with Freelancer.com as well. One and done for me. In fact, the majority of freelancing sites miss the mark on making developers' interests a top priority, as well as on what they are supposed to do as the curator of a freelance site: ensure projects are completed successfully.

Gun.io (http://www.gun.io) has been the only freelancing site that takes that extra step to ensure both sides are satisfied.

Please add Guru to this list. Actually, the whole commodity coding marketplace is destroying us in North America. What we need is a marketplace that isn't so greedy and doesn't let just anyone with $50 in to create the next Facebook. It should also be limited to citizens of Canada and the United States.

The engineering world would be a much better place if more people built beautiful, easy to use GUIs on top of the confusing command line apps we all rely on. I switched to Tower for git a couple years ago, and seeing people struggle with diffs and rebasing on the command line makes me sad.

This software may be version one and still have some kinks to work out, but I love it anyway. Nice work!

Looks good but I'm curious, why is it not distributed signed with an Apple developer account? That means there is no kill switch Apple can flip in case it turns out to be malicious software. (Especially important for a package manager!)

This is one of those things where I don't have any idea what's going on. I assumed from the title it was about using your Mac to write apps for your Mac. But reading the page I can't even confirm or deny that theory.

Cakebrew is a godsend!! Started using it just now, and I have to say I'm not disappointed!

Coming from MacPorts, Homebrew actually is one of the simplest command-line tools I've used to install applications on a Mac. Still, on numerous occasions I've had to spend a considerable amount of time troubleshooting warnings and library conflicts. In situations where I need to set up a development environment quickly, such as PHP and Apache, Laravel, etc., the last thing I want is more overhead.

I only wish something like this existed for Composer. Or does it already exist and I'm just not aware of it?

Congrats, this is wonderful. Now I just wish someone would create a utility to sort out conflicts caused by running MacPorts at the same time (or a straightforward way to migrate to just one or the other).

I've recently spent a couple of weeks doing a deep dive into Docker, so I'll share some insights from what I've learned.

First, it's important to understand that Docker is an advanced optimization. Yes, it's extremely cool, but it is not a replacement for learning basic systems first. That might change someday, but currently, in order to use Docker in a production environment, you need to be a pro system administrator.

A common misconception I see is this: "I can learn Docker and then I can run my own systems without having to learn the other stuff!" Again, that may be the case sometime in the future, but it will be months or years until that's a reality.

So what do you need to know before using Docker in production? Well, basic systems stuff. How to manage linux. How to manage networking, logs, monitoring, deployment, backups, security, etc.

If you truly want to bypass learning the basics, then use Heroku or another similar service that handles much of that for you. Docker is not the answer.

If you already have a good grasp on systems administration, then your current systems should have:

If you have critical holes in your infrastructure, you have no business looking at Docker (or any other new hot cool tools). It'd be like parking a Ferrari on the edge of an unstable cliff.

Docker is amazing - but it needs a firm foundation to be on.

Whenever I make this point, there are always a few engineers that are very very sad and their lips quiver and their eyes fill with tears because I'm talking about taking away their toys. This advice isn't for them, if you're an engineer that just wants to play with things, then please go ahead.

However, if you are running a business with mission-critical systems, then please please please get your own systems in order before you start trying to park Ferraris on them.

So, if you have your systems in order, then how should you approach Docker? Well, first decide if the added complexity is worth the benefits of Docker. You are adding another layer to your systems and that adds complexity. Sure, Docker takes care of some of the complexity by packaging some of it beautifully away, but you still have to manage it and there's a cost to that.

You can accomplish many of the benefits of Docker without the added complexity by using standardized systems, ansible, version pinning, packaged deploys, etc. Those can be simpler and might be a better option for your business.

If the benefits of Docker outweigh the costs and make more sense than the simpler, cheaper alternatives, then embrace it! (Remember, I'm talking about Docker in production; for development environments, it's a simpler scenario.)

So, now that you've chosen Docker, what's the simplest way to use it in production?

Well, first, it's important to understand that it is far simpler to manage Docker if you view a container as a role-based virtual machine rather than as a deployable single-purpose process. For example, build an 'app' container that is very similar to the 'app' VM you would otherwise create, with the init, cron, ssh, etc. processes within it. Don't try to capture every process in its own container, with separate containers for ssh, cron, the app, the web server, etc.

There are great theoretical arguments for having a process per container, but in practice, it's a bit of a nightmare to actually manage. Perhaps at extremely large scales that approach makes more sense, but for most systems, you'll want role-based containers (app, db, redis, etc).
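As a concrete sketch of what a role-based 'app' container might look like (the base image, package list, and file names here are illustrative assumptions, not taken from the comment), one common pattern is to let supervisord babysit the handful of processes the equivalent VM would run:

```dockerfile
# Illustrative role-based "app" container: one container per role,
# with supervisord managing sshd, cron, and the app process together.
FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y supervisor openssh-server cron

# Hypothetical supervisord.conf declaring [program:sshd],
# [program:cron], and [program:app] sections.
COPY supervisord.conf /etc/supervisor/conf.d/app.conf

EXPOSE 22 8000
CMD ["/usr/bin/supervisord", "-n"]
```

The trade-off is exactly what the comment describes: you lose the one-process-per-container purity, but the container behaves like the VM your existing tooling already knows how to manage.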

This is not impossible; it can all be done, and several large companies are already using Docker in production, but it's definitely non-trivial. This will change as the ecosystem around Docker matures (Flynn, Docker container hosting, etc.), but currently, if you're going to attempt using Docker seriously in production, you need to be pretty skilled at systems management and orchestration.

There's a misconception that using Docker in production is nearly as simple as the trivial examples shown for sample development environments. In real-life, it's pretty complex to get it right. For a sense of what I mean, see these articles that get the closest to production reality that I've found so far, but still miss many critical elements you'd need:

Shameless plug: I'll be covering how to build and audit your own systems in more depth over the next couple months (as well as more Docker stuff in the future) on my blog. If you'd like to be notified of updates, sign up on my mailing list: https://devopsu.com/newsletters/devopsu.html

I've just started using docker (I had to reimage my linode to take advantage of the recent upgrade). I've got nginx and postfix containers running. If anyone can offer some thoughts on the following points I'd be grateful.

1) I built two Dockerfiles on my laptop (one for nginx, one for my postfix setup), tested locally, then scp'd the Dockerfiles over to the server, built the images there, and ran them. I didn't really want to pollute the registry with my stuff. Is this reasonable? For bigger stuff, should I use a private registry? Should I be deploying images instead of Dockerfiles?
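On question 1: one registry-free option is to ship the built image itself rather than the Dockerfile, so the server runs exactly the bytes you tested. A sketch, with made-up image and host names:

```shell
# Build and test locally, then move the image without any registry.
docker build -t myapp/nginx .
docker save myapp/nginx | gzip > nginx-image.tar.gz
scp nginx-image.tar.gz server:
ssh server 'gunzip -c nginx-image.tar.gz | docker load'

# Or stream it in one step:
docker save myapp/nginx | gzip | ssh server 'gunzip | docker load'
```

A private registry becomes worth it once several hosts or people need to pull the same images; for a single server, shipping Dockerfiles or image tarballs is perfectly reasonable.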

2) The nginx setup I deployed exports the static HTML as a VOLUME, which the run command binds to a directory in my home dir, which I simply rsync to when I want to update (i.e. the deployed site lives outside the container). Should the content really be inside the container?
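On question 2: keeping the content outside the container is a legitimate pattern; the trade-off is that the image is no longer self-contained. What you describe might look like this (paths and names are illustrative):

```shell
# Content lives on the host; the container only serves it, read-only.
docker run -d -p 80:80 \
  -v /home/me/site:/usr/share/nginx/html:ro \
  --name web my-nginx

# Updating the site is then just an rsync to the host directory;
# no rebuild or restart of the container is needed.
rsync -av ./public/ server:/home/me/site/
```

Baking the content into the image instead makes deploys atomic and versioned, at the cost of a rebuild for every content change; either choice is defensible.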

3) I'm still using the 'default' site in nginx (currently sufficient). It would be kind of nice to have a Dockerfile in each site I wanted to deploy to the same host. But only one can get the port. I sort of want to have a 'foo.com' repo and a 'bar.org' repo and ship them both to the server as docker containers. Don't really see how to make that work.

What I think I want is:

- a repo has a Dockerfile and represents a service

- I can push these things around (git clone, scp a tgz, whatever) and have the containers "just run"
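On question 3: a common way to get the "one repo per site, both on one host" shape is to give each site container its own high port, and put a single reverse proxy on port 80 that routes by Host header. A sketch with invented names and ports:

```shell
# Each site container binds a different host port...
docker run -d --name foo-com -p 8001:80 foo-com-image
docker run -d --name bar-org -p 8002:80 bar-org-image

# ...and one nginx on the host (or in its own container) owns :80,
# with server blocks along these lines:
#   server { server_name foo.com; location / { proxy_pass http://127.0.0.1:8001; } }
#   server { server_name bar.org; location / { proxy_pass http://127.0.0.1:8002; } }
```

With that in place, each site repo can carry its own Dockerfile and "just run" wherever you ship it, and only the proxy needs to know about the port assignments.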

I've been looking for a way to integrate docker into my existing workflow. This integration takes nothing away from Docker and just makes Vagrant that much more flexible and valuable to teams already using it and newcomers. Can't wait to run this through its paces.

I currently have an ansible script which can set up a web-service on any Debian/Ubuntu box, and can be invoked

1) over SSH, or 2) by Vagrant when provisioning a VM.

Docker, on the other hand, provisions its containers from a rather simplistic Dockerfile, which is just a list of commands. The current solution for provisioning a container through Ansible is rather messy[1], and shows that Docker's configuration doesn't have the same separation of responsibilities that Vagrant's does.

Luckily, this lets me use Docker as another provider through the Vagrant API. Woooo!
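For anyone who hasn't seen the integration yet, here's a hedged sketch of what a Docker-backed Vagrantfile can look like; the image name and playbook are illustrative, so check the Vagrant docs for the real API:

```shell
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image   = "ubuntu:14.04"
    d.has_ssh = true           # lets ssh-based provisioners run
  end
  config.vm.provision "ansible" do |a|
    a.playbook = "site.yml"
  end
end
EOF
vagrant up --provider=docker
```

The provisioner block is the same one you'd use with the VirtualBox provider, which is exactly the separation of responsibilities this comment is after.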

This is really cool. It'd be great to be able to piece together development environments from dockerfiles even quicker than you can now.

One of the great things about Docker is that once you've played about with it for an hour or so, you've already picked up most of it. It's not like Chef or Puppet; configuring environments using VirtualBox and a VM is really simple. I wonder how much faster this will make things.

This is the best piece of news to start my Wednesday morning that I could've asked for. I've been slowly converting the entire team in the office over to Vagrant, and am using it for everything. I recently started playing with Docker and wanted to explore deployment of our web apps using it, and now I'll be able to slot it in to my existing workflow! Vagrant is amazing :)

I have wanted this since Docker was announced last year. In my eyes the biggest gain of Docker for development over VMs is boot time. Now I can turn all my Vagrant VBoxes to Docker containers, and work much faster. Thanks to all the maintainers for the hard work.

Vagrant was a huge step forward for managing vm environments, but I'm afraid its integration with Docker is forced and misguided.

For instance, the idea of an ssh provisioner doesn't jibe with Docker. The better approach is to run the container with a shared volume, and run another bash container to access that shared volume. If you are just starting to look at Docker, I would recommend using Vagrant to provision the base image and leaving the heavy lifting to Docker itself.
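The shared-volume pattern described above, spelled out (container and image names are made up):

```shell
# Run the app container with a volume, and no sshd inside it...
docker run -d -v /data --name app myapp-image

# ...then attach a throwaway shell container to the same volume
# whenever you need to poke at /data; it is removed on exit.
docker run --rm -it --volumes-from app ubuntu bash
```

This keeps the app container single-purpose while still giving you interactive access to its data.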

It's nice, but it still doesn't fix the issue for people with mid-range machines who develop in a VM full time and want to be able to run a VM within a VM.

For example with fairly old but still reasonable hardware you cannot run virtualbox inside of an existing virtualbox instance.

If you have a Windows box and develop full time in a Linux VM, you cannot run Vagrant inside that Linux VM, because unless you have a modern CPU, it lacks the instruction sets required for nested virtualization.

Using Docker instead of a VM would work, but Docker only supports 64-bit operating systems, so anyone stuck with a 32-bit host OS still can't use Vagrant and has to resort to using raw Linux containers without Docker, which is really cumbersome.

Is anyone else really offended by the name of their product? I mean why didn't they name it Gypsy or Hindu, maybe Eskimo or Shemale? So many groups out there just waiting to be further denigrated, trivialized and then commoditised. Fuck these guys.

I really don't want to be negative, but every browser update in the past year feels like a step back. This is true for Chrome and Firefox. Each update contains more "Sign in to your browser" stuff plastered everywhere. Eye candy is added. Useful configuration options are removed.[1] Many of these changes seem to be made with the goal of increasing revenue, not improving user experience.

Rule #0 of business is: Listen to your users. For browsers, one straightforward way to do this is to look at what extensions and addons users install. By far, the winner is adblock. Almost everyone who knows how to block ads does so. Therefore, if you are making a browser and you care about user experience above everything else, you will have ad blocking by default. That no major browser does tells us what their priorities really are.

Again, apologies for the negativity. This has frustrated me for some time.

Edit: I realize that if everyone suddenly started blocking ads, there would be darkness and chaos. But the current situation is only tenable because a small fraction of users have the know-how to get what they want. You can avoid ads if you are technically proficient or know someone who is. Everyone else has to put up with ads. Advertisers annoy millions if not billions of people, effectively subsidizing the usage of those with ad blockers. That doesn't seem fair to me.

Claims to be detail-obsessed, but the Mac version has the close/shrink/zoom buttons floating at the wrong height (like iTunes 10 briefly had until it recanted), and a title bar gradient with non-standard color and height (too short for an integrated toolbar, too tall for a basic window title bar, and too light in color for either) above a toolbar with the same weird gradient used again.

Windows and Ubuntu versions look much better; the Mac version should be fixed.

These people behind Australis are brave, competent and passionate. It shows in the remarkable experience they've built. I am a nightly user and I got to see these changes land one at a time. Hugs and cheers:)

The killer feature in Chrome was and is the multiprocess implementation. I use Firefox for intranet browsing at work and Chrome for personal browsing: the former is a sloth compared to the latter, and hangs even for seconds when loading a big page whereas Chrome will happily use as many of my 12 cores as it likes and it just doesn't even slow down.

Firefox has had their comparable hack (Electrolysis?) in some prototype stage for a long time but the thing is Chrome actually delivered it... years ago. This turned the roles into a catch-up game where Firefox tries to match Chrome instead of other browsers trying to match Firefox, and the setting has remained as such since then.

The difference is still astronomical, and I'm not at all convinced that a new user interface could have much effect there. The browser UI was pretty much standardized 15 years ago.

Regarding incremental UI changes, I recall reading that Google staged Chrome's tab style and color redesigns over multiple releases, presumably to avoid upset users. I'm not sure whether to admire their concern for user confusion or to feel like the dupe of some magician's sleight of hand. :)

I've been surprised by the amount of hate for Australis. Are the people who criticise it actually using it?

I've moved back from Chrome to Firefox and I'm a big fan of the changes that they've made. A lot of clunky interface elements have been eliminated. I really like the customisable menu - it's a much better place to put semi-frequently-used add-ons than has been available in the past. The same paradigm works nicely on mobile, too.

The next things to tackle are probably the bookmarks and options dialogs, both of which are a bit of a pain. Chrome's searchable options were a game changer, and Firefox needs something equally easy-to-navigate.

Why have reverse tabs won? Has anyone done any usability tests on them? On OS X, with only about 10px of height left as a hit area, it's really hard to drag a window that uses them. For context, 10px is about half the cursor's height.

I can see a reason for them on Win/Linux, but I find them completely unfit for the Mac. I guess people just maximize the window and leave it at that.

On the other hand, I'm glad there's still a distinction between the search box and the address bar. The annoyance of omnibar mistaking a url for a search query and vice-versa, even admitting it's a rare event, is not worth the trouble to me. Besides, educating the user on such difference seems important to me.

I've been using it for a couple months now. Maybe it's just me, but I really don't care too much about how Firefox looks. I actually quite dislike curved tabs, but I can live with it. What frustrates me the most is that I don't want to wait for the fancy animations to finish before doing something. If I want a menu to come up, I want it to come up.

Most of the menu reorganizations have very little effect on me, since I usually use shortcuts. But waiting for some of these animations just hurts my productivity, and I've seen other people share my dismay. When changes like this hurt the people who know how to use their browser and simply want to get things done, it's saddening.

I really ask myself why they insist on putting the tabs on top. With today's widescreen displays, putting them on the left or right makes more sense for most use cases (at least with FF it's possible to get this via an add-on). OTOH, they try to reclaim every pixel of vertical space while there is plenty to spare at the left and right. Do all developers only work on 13" laptops today?

Is the lack of a unified search/address bar a deliberate choice or the result of some weird IP/patent thing? Every time I switch back to Firefox, it trips me up. AFAIR, it's the only major browser that still does this, right?

The post goes to great lengths to say they're doing a big, meaningful overhaul instead of a UI tweak. However, the examples given are nothing more than a bunch of tweaks (and not necessarily ones I like). It sounds like someone is trying to build up hype over nothing.

It's hilariously sad how a new UI skin is touted as a reimagining of the whole browser.

Opera had a fully customizable UI 12 years ago. And look at us now; somehow we've moved backwards in functionality, and even Opera nowadays is nothing more than a bad, non-customizable Chrome skin :(

I dread the day most of the web stops working on Opera 12.16. I won't even be able to tune Chromium to my specific needs; after all, it requires 16GB of RAM to compile now (and that number will probably grow).

So many changes nowadays.. isn't there anything that remains the same, something we can rely on in this chaotic world lol :'(

Anyway, I was hoping the 'zooming' and the 'back/forward page swiping animation (on a trackpad)' would be improved to be more smooth/sexy, like they are in Safari on OS X. Unfortunately, this hasn't changed.

A bit off topic here, but does anyone know of any browser that puts the URL field and bookmarks 50/50 on the same row? I don't really need to see the full URL at all times, and I only have a small number of bookmarks. On top of that, I use a fairly small screen, so it would be great to combine them.

It looks absolutely terrible having addon icons crammed into the same bar as everything else. What's the problem with the addon bar? There's even less view pane area now because the top area is so large.

The active tab curve is feminine, not in a good way, but I can live with it.

It does feel faster though, maybe they broke some addons that were slowing it down. Time will tell.

If they are so obsessed with details, how did they miss that some people need the bookmarks star but not the bookmarks menu button? For some (I'm sure completely arbitrary) reason, it is impossible to decouple these two buttons in Firefox, so instead of a little star in the URL bar you get a honking huge two-button combo that takes up something like six times as much space (which is even more limited in Australis, since there's no add-on bar).

Has anyone figured out how to get the tabs below the address bar again? This is horrible. I'm not really sure how to describe my hatred of this but it is there. I love FF but if the tab bar can't be moved I think I might have to switch to Safari.

"The Firefox UI is a moving target. It is under constant 'improvement', which means 'change' which means every few months I'm forced to upgrade it and shit has moved around and I need to re-learn how to do a task that I was happily doing before. This does not often happen with Safari. Their UI has been remarkably stable for many, many years."

Bookmarks sidebar button and add-on toolbar... nope, no one uses those AT ALL. Now I have all that shit in the upper right-hand corner instead of the lower left, near the Start Menu in Windows, and have to use keyboard shortcuts to get to the bookmarks sidebar, again in Windows. On OS X I'm more adept with keyboard shortcuts, but this monkey was trained to use what he's been using for the last fucking decade in Windows.

It is still frustrating that a bunch of plug-ins appear to have been disrupted because of removing the add-on/status bar, though. Not everything has moved to the new location automatically, and it's not immediately obvious how to get some of them back.

Australis is removing configuration options for absolutely no reason. If people want Chrome, they'll use it. I don't care if Mozilla wants to ruin the defaults, as long as they give people who want a normal browser a way out; but instead they are removing everything they can get their hands on.

Firefox is dumbing down. That's fine if it's IE, Chrome or Safari where the majority of users think said browser is 'the internet', but Firefox is for power users. People who like their privacy, who want to customise their software to their own use case, and who are rapidly running out of options in a world filled with shitty software that assumes the user has a room temperature IQ, removes options and metro-ifies everything while primarily existing to make money off your private information. Even Mozilla are not only dumbing down, but switching to privacy invasion mode with in-browser ads, and replacing their secure sync with one that drops it all on their servers presumably unencrypted as it works with a basic username/password.

It's still a browser from the people who allowed a mob to push Brendan Eich out because he donated money to support a political cause that was and is not only legal but reflective of the views of a large minority of Americans. It's pretty, but I think I'll pass.

If someone wants a service like this, in my opinion Earth Class Mail is a better alternative. They are not in any way "hip" or "web 2.0", and they would never in a million years talk about "disrupting" some established legal regime; frankly, I'm not just "ok" with that, it makes me ecstatic. So when I see this "how the post office killed digital mail" story, about a dream I've been successfully living now for years through one of their "less disruptive" competitors, I can do nothing but laugh.

Instead, Earth Class Mail is a service that has been in business since 2006, and they operate within the existing legal framework of mail: they are effectively the kind of service you're required to use if you live in an RV, where a third party receives your mail on your behalf. To make this work, you sign forms from the post office that you send to Earth Class Mail, which are then kept on file to demonstrate that they can legally open your mail. Maybe Outbox used the same process, maybe they didn't, but this article made it sound like Outbox is responsible for this legal framework: wrong, the ability to assign "open my mail" rights has existed for a long time.

Rather than having to have cars driving around attempting to "undeliver" your mail with some ludicrous three-day delay as Outbox did (at extreme cost to the service that calls into question whether their business model would even succeed, a fact mentioned in this article linked today, and limiting them to only even being able to think about operating in high-density regions of the country), you simply have your mail delivered to them. You can initially set this up with your local post office as you work on "moving" to ECM, and you can even do it temporarily if you just want to try it out (the post office will happily forward mail for as little as 15 days: again, this is a use case they actively support).

But frankly it is so amazingly relaxing once you "commit" and outsource your physical address. I have a lot of friends that move every couple years, and the idea that they have to change their address at the same time is silly: that is the most stressful moment to be trying to move mail delivery and you can't usually overlap the old and new addresses to buffer mistakes. In the most extreme situations, people who are traveling a lot (for whatever reason; maybe they have a job that requires them to be in random locations for weeks on end a lot, or maybe they are just kind of nomadic and stay with friends a lot as they travel the country) will tell you to send things to their current location. Outbox sort of helped with this, but the three-day delay sounded really irritating: ECM just solves this problem outright; you don't even need a real physical address at all.

Indeed, I seriously have now switched to using my Earth Class Mail address for everything: my drivers license even has that address on it (and yes, I verified with the people at the California DMV that there was no issue with this, and they technically do have my physical address on file; but their policy is to print the mailing address of the driver on the card), which makes it really easy for me to never get into an argument with anyone about what my address is: I effectively now live at Earth Class Mail in Los Angeles. The only people who know my real address are the US government (DMV and voting registration, though they happily send my voting materials to ECM), the power/cable/phone services to my apartment (again, bills go to ECM), and my health insurance company (they base pricing on where you physically reside).

They offer multiple locations around the country, so you can get an address vaguely near you or opt for one that "looks good" for your purpose (maybe you want your startup to look like it has an office in San Francisco, for example). With some of the addresses they are legitimate "street addresses" that can receive packages on your behalf, and you can either have the package forwarded to you or you can go pick it up from them if you need it now and live near enough to the location. (Though, with packages, I normally just one-off deliver those to my apartment.) (I wonder if you can have them open the package and take a picture of the contents... I never asked ;P.)

> So having worked on the Hill they knew of the USPS's well-documented inefficiencies. As they describe it, they knew that the USPS would not be able to work out its own problems, so "perhaps naively, we hoped to partner with USPS to provide an alternative to the physical delivery of postal mail to a subset of users, hoping this would spur further innovation and cost savings."

What's wrong with this story is that Outbox didn't want to do the one thing that would keep it in business: make its customers fill out a form that allows a third party to accept mail on their behalf. There are plenty of businesses digitizing mail right now -- travelingmailbox.com, earthclassmail.com, amongst others. Why Outbox.com refused to go this route and instead decided to go out of business was their decision. There was a workaround (http://travelingmailbox.com/usps-form-1583-ca) -- they just opted not to take it. That's not the USPS's fault.

You mentioned making the service better for our customers; but the American citizens aren't our customers; about 400 junk mailers are our customers. Your service hurts our ability to serve those customers. (From the USPS, as quoted in the article.)

The article blows right past this like it's insignificant. Junk mailers sort their own mail, drop-ship it to the local BMC, and pay the post office for the privilege. This subsidizes regular mail. If recipients can get rid of junk mail with a mouse click, that delivery stops being worth paying for.

The post office getting upset about ways to prevent junk mail from being delivered is just like a website complaining about Adblock.

I presumed this story was going to be about the USPS monopoly on first-class mail and would involve an armed raid to shut down competition, as has happened in the past.

"The monopoly is well enforced. The USPS can conduct searches and seizures if it suspects citizens of contravening its monopoly. For example, in 1993, armed postal inspectors entered the headquarters of Equifax Inc. in Atlanta. The postal inspectors demanded to know if all the mail sent by Equifax through Federal Express was indeed "extremely urgent," as mandated by the Postal Service's criteria for suspension of the Private Express Statutes. Equifax paid the Postal Service a fine of $30,000. The Postal Service reportedly collected $521,000 for similar fines from twenty-one mailers between 1991 and 1994."

I call BS on this article. We're not getting the full story.(Using a throwaway account since I'm travelling, and on a public computer; this article pissed me off so much that I had to respond right away)

Background: I used to work for a research lab which got a majority of its funding from USPS. Worked there for ~10 years. Interacted with the USPS engineering folks in Merrifield, VA very closely. I can assure you: the USPS has some very good engineers (in the true "engineer" sense of the word). None of them would call digital "a fad". Not one.

Now, to the article: "but the American citizens aren't our customers; about 400 junk mailers are our customers." ... Wrong! No postal employee would call it "junk mail". They all call it "bulk mail". I know, because I was corrected on this myself. :-)

"Digital is a fad"... wrong again. At one time, the USPS was the largest user of Linux; all of their mail sorting machines were running OCR on Linux boxes (they also were a huge SGI shop, with racks and racks of Octanes and O2s). Today, when mail cannot be sorted automatically, its image is sent to a remote data-entry site, where operators enter the address by hand. See the fluorescent barcode at the back? That's used to tag the mail and barcode it later, all digitally.

And finally: we, in our research lab, actually had proposed this "Outbox" style electronic mail forwarding to them back in 1998 or 1999 (the Internet was new). I don't remember the details, but there were some legal issues surrounding it that prevented it from taking off. Remember: the USPS is governed by laws (passed by Congress) that were written around the time of Ben Franklin. Fun fact: the average speed of letter sorting by hand (800 pcs/hour) was established by Ben himself, and is still the target for manual sorting.

Plus: I doubt the PMG would become personally involved in such small nonsense.

I know everyone wants to make fun of USPS; but for the price, they do a phenomenal job. People want them to compete with "the market", but don't realize that the USPS' hands are tied: they can't raise rates without approval from the Postal Rate Commission; they can't close post offices that have no customers; etc. etc. After the Civil War, when Congress wanted to give the veterans jobs, where did it send them? To the Post Office! I've heard (rumor) that even today, the USPS cannot use your discharge status against you for a job.

What's strange is, USPS actually seems to be aware of such services and perhaps endorses them. When I signed up for Virtual Post Mail, and proceeded to setup a mail forward on the USPS website, I got this message:

> Our records indicate that this address is a commercial mail receiving agency (CMRA). If you are forwarding your mail to a CMRA, please enter your private mailbox number (PMB) below.

So what did Outbox do to upset USPS?

Because what I fear is that my mail scanning service will suffer the same fate as Outbox, suddenly leaving me without a way to get mail. I don't live in the US anymore, but I'm still a citizen, I still pay taxes, and I still own property there. Without mail scanning there is no feasible way for someone in my position to receive mail. How is the water company going to let me know that my bill is past due? How is the city going to let me know that my property taxes have changed? This isn't just a nuisance; it makes it impossible for me to do business in my own country.

This is one of the most disingenuous articles I've read in a while. I expected something a bit more subtle from a Yale alumnus.

They believed that their technology could actually save the Post Office money. If consumers started to opt in to Outbox, or other services like Outbox, then the Post Office could receive the full benefits of the stamped envelope but never have to deliver those packages, which is one of the biggest costs for the Post Office. In fact, if properly implemented, when a customer sends a letter from Austin, TX to Alaska, if the Post Office knew that they weren't going to receive the letter anyway, then the Post Office could forward the letter from Austin directly to Outbox, and never have to ship the letter across the continent.

This, for example, is just laughably wrong. Marginal cost isn't the bugbear of the USPS; universal service obligations are. As long as there's one person in Alaska who doesn't want to sign up for digital mail (possibly because they can't reliably connect to it in Alaska), the USPS has to fly planes or sail boats up there to deliver the mail anyway. And as Outbox themselves discovered, moving mail around for individual customers is hideously expensive. It can be made efficient in cities where there is sufficient population density, but something like 1/3 of US addresses are on rural routes, and delivery to those is of course more expensive. Even if half the customers on rural routes sign up for a service like Outbox, there's no promise that they'll be the ones farthest from the sorting offices, so mail carriers will need to travel more or less the same routes even while serving fewer customers; and all customers will still want packages delivered, because packages have physical rather than purely informational content. Unfortunately, the profit margin on package delivery is only about 1/3 that of first-class mail delivery, which continues to decline in volume at about the same rate that demand for package delivery increases, leaving the USPS in a no-win situation which requires it to balance the books through cuts rather than investment and growth for the foreseeable future: http://about.usps.com/strategic-planning/five-year-business-...

I loathe junk mail with a passion and it really irritates me that I have no way to opt out from it, that the USPS is required to inefficiently front-load all its fiscal obligations as if they were payable tomorrow, and a whole bunch of other things. But by ignoring the legal operating constraints imposed by Congress on the USPS and the resulting necessity of dealing with bulk mailers, the author is doing his readers a huge disservice by offering trite solutions to knotty problems, essentially arguing that the USPS should pick up the costs of mail forwarding on behalf of a service which reduces the utility (and thus revenue) of the USPS's largest income stream (bulk delivery).

In 2014 Derek was selected for Forbes' top 30 under 30 list for law and policy and as a 2014 Global Leader of Tomorrow, for thought leadership and activism on NSA surveillance and innovation policy.

I'm pretty sure that if the USPS had been offering to open, scan, digitally archive, and destroy your physical mail for the last decade as Derek says they should have been, he'd be writing a similarly indignant rant about big government overreach and the crowding out of private competitors, regardless of assurances about strict siloing or privacy controls.

I was reading with great interest until I came to the heading about "disruption" at DC. Defending "ask later" practices is neither disruptive in the proper sense, nor particularly ethical.

The reporting here attempts to paint a picture of a slow, outdated USPS (and they surely are, to some extent) by way of obviously false comments ("digital is a fad"? Really? I'm supposed to believe someone at the USPS actually said this, in context? Let's critique the USPS, but let's not fabricate silly positions.)

Too bad, really, it sounds like a great idea and excellent technology, but marred by a shameful ideology.

Um, the existing industry doesn't use "disruption" negatively because they "don't speak the same language." They use it negatively because you are talking quite literally about disrupting their business and probably putting them out of it.

I'm not talking about Outbox and the USPS specifically so much as the fetishization of "disruption" the OP author buys into without question, as if the only reason to be scared of "disruption" is a cultural misunderstanding, that you're not with the program. Rather, quite obviously it's bad for some existing business interests; but it's also certainly possible to challenge the religious belief that disruption of markets always leads to better outcomes for consumers or society as a whole.

They didn't think about the corner cases that are part of mail. What do you do with Certified Mail? Registered Mail?

Different classes of mail have different security and other business requirements, and involving some random "disruptive" third party has many potential consequences.

I'm not sure that I understand why this was necessary for the company anyway. I subscribed to a service in 1999 that did this -- you had bills sent to a PO Box and they would scan/PDF everything for you (even ship copies on CD-ROM). They would also pay your bills for you if desired.

I've seen several people mention Earth Class Mail as a working alternative here. I'm not sure even their lowest monthly charge would provide enough value to be worth it for my own use case, but I do have a few questions for their users.

Do you still have a mailbox at your home? If so, do you still check it regularly? If so, do you still receive bulk mail drops from postal carriers in that mailbox? I'm talking about the ones that are typically addressed to "Current Resident" and such.

Unless you are in a position that you can confidently forgo ever checking that mailbox again, it seems that you would still be receiving those and be forced to deal with them. That would take a lot of the potential out of the service for me.

These guys are doing the classic startup shuffle: pick off the profitable people and leave some chump (aka the government) with the unprofitable ones.

Although, I'm a bit skeptical that they couldn't "undeliver" mail profitably in a city-density area like Austin. Focusing just on businesses and apartment complexes, $640/mo + mileage gets you a person twice a week, for 8 hours a day, at $10 an hour. At $5 per month per subscriber you need to collect about 150 subscribers per month to make your nut.

Really? They couldn't get 150 subscribers serviced by 1 person over 8 hours? Sounds like they didn't control their rollout density or price correctly.

This sounds more like "Waaaaaah, we're only going to be a $20 million company rather than a $2 billion company. I'm going to have to hang my head in shame at the next Skull-and-Bones barbeque. We should shut down."

I haven't received physical mail where I live in 15 years. I have all my paper bills sent to paytrust, who scans them, and then pays them per automated rules (or allows me to manually approve.) Physical Mail just goes to 650 Castro Street in Mountain View, where I get a re-mail once a month wherever I am in the world.

I lived in an apartment for about 18 months, but never asked for a key to my mailbox as there really was no reason for me to open it.

The only packages I ever need to receive at my place of residence are via FedEx/UPS.

It's been obvious for at least a decade that the customers are direct mail advertisers and we're the product, and we can't really opt out (I've signed up for several opt-out services and I still get junk mail).

You do wonder why post offices - everywhere - not just in the US - have not done something with electronic delivery.

If I were Postmaster General I would like to see a post office ISP that only accepted mail from government departments, local authorities, banks, hospitals, doctors, schools and other agencies. From there people could set up a forwarding address - if people wanted to just check their mail from one account, e.g. gmail, they could have the 'important stuff' rolled into it. Or they could set up POP/IMAP.

There could also be a webmail interface built to the highest accessibility standards. Clearly the cryptography would have to be in place so only the sender and the recipient could read the mail - a 'virtual envelope'. Naturally there would be tracking tags so they knew if someone had read that 'final demand'.

As a competitive service for banks etc. wanting to send out statements it could work very nicely. Good for trees, too.

This is such bull. How could any postal service in good conscience (or legally for that matter) co-operate with a service which consists of opening other people's mail?

Any postal service should deliver mail to the addressee, unopened and unchanged. Any deals that put that mail in the hands of third parties should be a no-go to start with.

And they also shouldn't be offering this themselves. Closed envelopes stay closed. If that means postal services are not profitable, so be it. That was never the reason they existed in the first place.

I thought this was interesting (both that the space shuttle still used core memory, and that the challenger memory was retrieved and its contents recovered):

> Core memory is non-volatile storage: it can retain its contents indefinitely without power. It is also relatively unaffected by EMP and radiation. ... For example, the Space Shuttle flight computers initially used core memory, which preserved the contents of memory even through the Challenger's disintegration and subsequent plunge into the sea in 1986.

I've read somewhere that they even had simple "computers" made out of stone structures that would then have these knot systems inserted into them, but for the life of me I can't remember what these were called.

This is from the series Moon Machines, which I cannot recommend highly enough. It's got interviews with the people who worked on Apollo hardware, testing footage I've never seen elsewhere, and some great anecdotes as well. It really gets down into the nitty-gritty design challenges they faced -- it's the Apollo program as the engineers saw it, not the astronauts.

It would have been interesting to see how the program was actually executed. Do the rings around the rope core somehow create a circuit that allows the processor to "know" in which order the 1s and 0s are in? Or is there some kind of mechanical process?
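For the curious: core rope is read electrically, not mechanically. A word's sense wire either threads through a core (read as 1) or bypasses it (read as 0). A toy sketch of that encoding -- deliberately simplified, since real Apollo rope modules packed many words per core through shared sense and inhibit lines:

```python
# Toy model of core rope memory: each word is defined by which cores
# its sense wire threads through (1) versus bypasses (0).

def weave(words):
    """Record, for each word address, the set of core indices its wire threads."""
    return {addr: {i for i, bit in enumerate(bits) if bit == "1"}
            for addr, bits in enumerate(words)}

def read(rope, addr, word_length):
    """Pulsing a core induces a signal only on wires threaded through it."""
    threaded = rope[addr]
    return "".join("1" if i in threaded else "0" for i in range(word_length))

rope = weave(["1011", "0110"])
print(read(rope, 0, 4))  # 1011
print(read(rope, 1, 4))  # 0110
```

And yes, the weave was checked: since the program is literally the wiring, a wiring mistake is a software bug, which is why verification took so long.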

It must have been very easy to make mistakes with all that manual work. I assume the weave had to be checked many times.

I suspect my disagreements with DHH's last couple of blog posts have more to do with what each of us has seen in the wild than with any actual disagreement in principle.

For instance, in my experience I more frequently encounter places that have way fewer tests than necessary and there is no consideration about how to verify requirements at all.

Further in my experience, the GUI and database layers are the least interesting parts of the systems I work with. They truly are parts that can and do get swapped out with some regularity.

I suppose if I worked mostly on systems where they were in essence gui's on top of databases with little logic in the middle, I wouldn't want to isolate those concerns either. It would be more trouble than it is worth.

For instance, when I write small unixy command line utilities I very rarely test anything in unit tests. What would be the point? I can easily define the entirety of the specification in example tests that utilize the utility as a black box. I still do it first though...

I feel like MVC is being treated as like the one true pattern to design your web app with and that's just simply not true. Rails is MVC and maybe that's all it should be for what it is intended for. Other projects might not be a great fit for such a simplistic view of the world and maybe that means Rails is not a great fit for projects that don't fit into the MVC abstraction.

Also, not every project is a web app and there are plenty of times where various testing approaches make a lot more sense than they do in Rails. It's too bad that a whole line of thinking about software quality is being disparaged because it isn't a good fit for Rails as DHH sees it.

TDD is a useful tool in the right context. Maybe that context isn't Rails.

It seems unwise to be telling a lot of smart people who care about software quality to "get off my lawn" so to speak, but I've never run a successful OSS project as big as Rails, so I probably don't have a clue about how to lead a community as big as Rails is.

The overriding message here is one of pragmatism. TDD, like a lot of methodologies before it, became gospel and people began practicing it in a dogmatic fashion without thinking about the best way to apply the principles to whatever problem is at hand. The spirit of TDD is that you have a safety net of tests to protect you from making changes in class A and breaking something in class B. If those are acceptance, integration or unit tests, great. If the code is cleanly organized and readable, great. Don't let zealots on either side of the aisle convince you to do anything beyond what makes sense to solve the problem that is sitting in front of you.

Any time we buy into a dogma at the expense of rationality, we lose. This has been demonstrated throughout history in human interactions with each other (via religion, politics, legal systems), the development of science and technology (see Galileo, Copernicus, the 19th century US doctors ignoring germ theory and killing a president).

Sometimes we create dogmas to try and move things away from bad ideas towards better ideas. Dijkstra's "Go To Considered Harmful" was one such effort. Gotos, as used at the time, were fucking terrible. They were used instead of higher level expressions like if/then/else, for, do/while, function calls. But the (at the time I was in college, early 2000s) refrain was tired and wrong (or misapplied). Sometimes, in some languages gotos can, in fact, be very useful, so long as their use is chosen deliberately and with care (see the C idiom of using gotos to jump to error handling/reporting code in functions).

In the end, nearly every development process runs the risk of becoming a dogma. Avoid that. Study the process, practice the process, and reason about where the process should actually be applied. And we already know that the answer isn't "everywhere and every time".

In Java, the most obvious example of testing affecting the design of a class is the necessity of avoiding private methods in order to facilitate testing. While there are ways around this -- reflection, PowerMock, probably others -- they all tend to be ugly and hackish.

This has an effect upon the design of classes, because the easiest path is simply to make private methods package private. This is frequently not the ideal design, and taken to its logical extreme means that you will have no private methods.

I think unit testing is important, and do use it. The line for me, though, is similar to DHH's here: when the drive for unit testing affects the design of the software, that's when I tend to become less enamored.

Maintaining giant test suites and trying to keep them running fast is why I am so glad to not be using Rails anymore. Dynamic languages don't scale well for me because the testing is difficult to scale. With Yesod (a Haskell web framework I help maintain) I have a fraction of the need for unit tests. The compiler already gives me the equivalent of 100% code coverage for catching basic errors. I can focus efforts on testing application logic and integration testing.

Maybe the real problem is that we have crappy tools for hexagonal-oriented architectures; especially Rails. Classic Rails style dictates that ActiveRecord is Good Enough for your domain logic. This creates a sort of framework lock-in: inheritance is one of the strongest forms of coupling there is, especially when you inherit from classes you do not control. The framework superclass is likely to be a relic of current-gen frameworks that we will not tolerate in the future.

The technological way out is to use a Data Mapper pattern ORM to isolate the domain logic and the persistence. But this approach won't catch on, because Rails devs have tasted the simplicity of ActiveRecord and aren't about to do more work to get the same result.

It is telling that many language communities eventually head towards amalgamating a collection of really good libraries in a low-coupling manner. This is still a fringe movement in Ruby.

I echo his sentiment. Integration testing, especially when you have a JS frontend, makes much more sense. I never saw the point of controller tests and making sure a controller assigns variable @widgets with [widget] and all that nonsense. An integration test will identify all those problems and then some.

My least favorite example of tests damaging an API design is dependency injection. Much of the time there is very little need to resort to dependency injection to create a good API with an easy-to-understand architecture, but it gets abused because it makes testing easier. You can supply your API with mock and stub classes at every turn if you use dependency injection everywhere, but the consequence is a more difficult-to-use API that requires the programmer who uses it to understand more arbitrary, unnecessary, implementation-specific details.

For example, maybe I just want to open up an encrypted TLS TCP socket to a server. From a user perspective this could be really basic: the library's API takes a server address, port and handlers, and it could be as simple as a few lines of code. But the dependency injection version of this might require creating an SSL factory, which requires an X.509 certificate provider, which requires a certificate storage locator. Then instead of an address you must provide an IP address factory method and a protocol factory, which requires a list of available protocol implementors. Then 200 lines later you want to actually manage your connection and you must provide a connection manager and a byte buffer which itself involves tons of cruft.
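For contrast, the "few lines of code" version genuinely exists in the wild; Python's standard library, for example, hides all of the certificate and protocol machinery behind defaults (shown here only to illustrate the simple-API side of the argument):

```python
import socket
import ssl

# Certificate store, protocol selection, and validation policy are all
# hidden behind sensible defaults; callers override them only when needed.
context = ssl.create_default_context()

def fetch_head(host, port=443):
    """Open a TLS connection to `host` and issue a minimal HTTP request."""
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls.recv(1024)
```

Every knob the inject-everything design forces on the caller (trust store, protocol list, validation policy) is still present, just tucked behind `create_default_context()` for the common case.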

Sometimes dependency injection is like a person walking around with their organs hanging outside of the body. When two people want to make babies they don't have to know low level biological mechanics of how sperm sends signals to a ready to be fertilized egg. They don't have to read and learn pointless documentation. They just insert the thing and everything usually works although under the hood it is maybe one of the most complicated processes in biology. That's how an API should work: making complicated things simple.

Does anyone know of a public example of a Rails application that does testing in the way that DHH says is good?

I'm tired of the talk talk talk talk talk talk of "proper" testing in Rails, yet the examples always seem to be hidden away behind company firewalls. I've only seen a couple Rails apps with Rails-Way test suites, and they were nightmares that took many minutes to run. But I have seen dozens of Rails apps written by opinionated Rails devs with strong views about what proper testing was... and the apps had no tests at all.

> the simple controller is forbidden from talking directly to Active Record [..] This is not better.

It is. The controller layer should be as dumb as possible, it shouldn't contain your (entire) application logic. It's a matter of single responsibility if anything.

Also, I find it very sad that we're still discussing the usefulness of the active record pattern. Other than convenience, it has none. It's a pain to maintain an application that uses it once it reaches a certain level of complexity.

And not just because of testability, it's a pain in the ass to replace/fine tune certain queries if you're calling active record methods in your controller.

Decoupling is one of the fundamental tenets of development, and provides far, far more benefits than merely enabling TDD. If he thinks decoupling is about TDD, he's missed out on architectures that can be easily fixed when bugs show up by isolating causes in changed code, on being able to extend without modifying core code (the open/closed principle), and on managing regressions in general. How do you scale software to a team of developers without decoupling?

The only argument I've ever seen against decoupling is performance, and it's rare that argument makes sense in all but the most real time of applications.

I was unaware that people were actually trying to unit test controllers. That to me just seems like a recipe for endless frustration. Mock out a web request? Please don't.

Everything I've ever read about Rails refactoring indicates that your controllers should be skinny, implying they don't need to be tested, push all complex logic out into helper functions, lib classes or models and unit test those.

"but does so by harming the clarity of the code, usually through needless indirection and conceptual overhead"

This argument feels a bit thin and unsubstantiated for the general case. I can see his criticism of hexagonal design applied to Rails, but he's using that as a straw man to attack TDD. I think he could better criticise the limitations of TDD by directly examining applications of red/green/refactor and other TDD principles.

TDD is supposed to affect design, in good AND bad ways. TDD does not claim to produce the best design, only the most testable one. The first time I read about TDD it basically said testability > clarity.

Succinct code that you don't know is doing the right thing is worse than more verbose code where you can easily verify what it does.

I do think that, specifically with Rails, tests get so plentiful that they take long to run, and that threatens the whole process. And the weighting of model/controller/integration tests is something that has bitten me before -- particularly doing less integration testing and more model testing, because integration tests can be flaky and an order of magnitude slower.

Since my first web programmer job, in 100% of my projects tests grew so big it took them minutes to run, making me nostalgic about the speed of Java tests I had for my first programming job.

I don't understand why he has to equate TDD with the mockist approach to TDD without clarifying that he is talking about the mockist approach and not TDD in general. Pivotal Labs, for example, is obviously a huge proponent of TDD, but has historically been hesitant towards true, isolated, heavily stubbed and mocked unit tests.

That makes me wonder if he just doesn't have a differentiated enough view of TDD or if he omitted that on purpose to get more attention. I am also not sure which answer would be more disappointing.

I don't really see DHH giving any arguments as to why designing for tests leads to poor design decisions. I suppose I can buy the argument that there are cases where this isn't true, but I can't think of any and he's not giving any. I would argue that Angular is a good example of how designing for testability creates good design decisions.

Secondly, I don't buy the idea that you should focus on integration tests over unit tests. Integration tests are important, but they're also the most expensive tests in terms of maintenance. Unit tests you can run with every code submit. You can run them multiple times per code submit. Integration tests take too much time for this to be practical.

In all, I'm tired of people making decisions based on what they're against. DHH is just being negativistic and defining his code design strategy around being against TDD and test-driven design. That's ok. But what design strategies does he support? He starts giving more information about that at the end, but I'm still left scratching my head and wondering what design philosophy he's actually advocating rather than what design philosophy he's bashing.

One thing I like about Java programmers is that they realize everything a class depends on needs to be passed to that class's constructor. There's really no way to avoid it. Change concrete instances to interfaces, and you have a nice testable class. Write an integration test, write a unit test -- they're both easy.
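The same constructor-injection idea, sketched in Python for illustration (`Clock` and `Greeter` are invented names, not from any particular codebase):

```python
import datetime

class Clock:
    """Real dependency: wraps the system clock."""
    def hour(self):
        return datetime.datetime.now().hour

class Greeter:
    """Everything Greeter depends on arrives through its constructor."""
    def __init__(self, clock):
        self.clock = clock

    def greeting(self):
        return "Good morning" if self.clock.hour() < 12 else "Good afternoon"

# In a unit test, swap in a fake exposing the same interface:
class FixedClock:
    def __init__(self, hour):
        self._hour = hour
    def hour(self):
        return self._hour

print(Greeter(FixedClock(9)).greeting())   # Good morning
print(Greeter(FixedClock(15)).greeting())  # Good afternoon
```

The production code uses `Greeter(Clock())`; the test pins time without any mocking framework at all.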

DHH is arguing from the perspective of a Rails developer working on a Rails application. It's no small kingdom but to discredit TDD as a practice for all software developers is short-sighted. There are enough counter-examples of the benefits of TDD in my own experience to make the claim invalid as a universal truth.

I don't see why TDD proponents make a big deal about not touching the database. It's as if they haven't heard of SQLite's in-memory option, in which the database is just another data structure in RAM, which is all that their extra layers of objects are. True, with that setup, you're using SQLite for tests and, say, PostgreSQL in production. But is that any worse than using your own mock objects in tests? What am I missing?
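For reference, the in-memory option the comment mentions is a one-liner:

```python
import sqlite3

# ":memory:" gives a throwaway database living entirely in RAM,
# created and destroyed per test with no files or servers involved.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

rows = db.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
```

The caveat implied by the question is real, though: SQLite's SQL dialect and type system differ from PostgreSQL's, so tests can pass against one and fail against the other.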

I have never had a problem with unit tests or int tests. As a rule I never use mocks, and everything fits into one of those areas. Either you have real data sources available (such as an in process db) or you make it a module that can be easily unit tested.

It's clear he is against TDD first, and looking for reasons second. I feel other factors are at play.

I certainly do agree about integration tests being important. I've also started moving towards using a live database for my tests. I set up a postgres database by copying over a master copy to a temporary directory and running a postgres daemon from there. It takes ~100ms and with fsync turned off it makes for snappy tests. If it starts getting to be slow I can always move it to a ramdisk.

Here's a library I wrote for golang which wraps it all up in a convenient package:

>> it's a mistake to try to unit test controllers in Rails (or similar MVC setups). The purpose of the controller is to integrate the requests from the user with the response from the models within the context of session.

Well said. Now, if only somebody from Salesforce.com could understand this and stop forcing their customers to write these useless tests for their controllers.

I really identify with some of the points he's making, they're observations I've made myself so it's nice to see someone with his clout bringing them up.

I wonder about the design thing though - our code is in some ways a document of the circumstances surrounding it. Does it make sense to have it conform to some Platonic ideal, which we corrupt when we alter it to make it more testable? I'm really not sure about this, but I doubt it. Code ultimately needs to work in a given set of ways and that's our primary concern with it. Making the code "pure" (or just "easy to read" if you like) is a service to other developers who come along later. So, the tradeoff is testability for intelligibility. I can imagine a lot of scenarios where that tradeoff is a rational one.

Black box (functional) testing is the way to go. I created a flow style of testing, which allows "Fast & Thorough Testing". This is a javascript & jasmine extension, but the concept can be applied to other languages.

Rails, Rails, Rails. Everything is about Rails. David is a good guy, but I'm starting to wonder if he's ever built a moderately complex system involving integrations, message queues, several data stores, a handful of third-party libraries and APIs, deployed on more than 3 machines.

TDD is a tool to manage complexity. It's advice, not a recipe. Like any technology, it isn't a substitute for thinking.

In the early 2000s I was living in Minnesota during the much-hyped arrival of Krispy Kreme. There were long lines at new stores and lots of doughnuts in the office every day. The hype prompted new franchises and an overabundance of doughnut shops, but unfortunately the demand dropped as the novelty wore off, and they ended up closing several stores and production facilities in the area.

Obviously Krispy Kreme weathered its over-expansion and is doing well today. I think the question for Groupon and Herbalife is if they have a business plan (and the financial reserves) to make the shift from exponential growth to more measured success.

I do appreciate the analysis and the message of the post that total sales can mask problems; however, neither of the examples is a failure. Herbalife may expand to other products instead of additional countries. Amway, probably the most successful implementer of the multi-level marketing scheme Herbalife uses, is over 50 years old and worth $11+ billion. Even the fictitious waffle shop can adjust by selling other products. In short, creating a large distribution chain and reaching many customers is tremendously valuable. The company can leverage that base and move to a sustainable model. If you're as successful as Groupon, you'll have plenty of runway to try different things.

I think Groupon is a slightly different case to Herbalife. Herbalife had/has a pretty substantial pyramid scheme component to it.

Groupon, IMO was a trend. Trends have a ballistic trajectory. The reports of hard selling and unhappy customers confuse the issue, but I think the heart of the problem was that Groupon was popular for a while and now it's less popular.

Our default business systems don't know how to deal with that kind of thing: a company with a 2-year half-life. All our financial systems and our valuations of companies are built around companies that are long-lived, practically immortal (in the sense that impacts net present value). But not everything is like that. A film or a computer game is often produced by a firm that forms and then disbands to create a single thing. It has all the things a normal company has: employees (including some highly paid stars), investors, assets, liabilities, etc. It only exists for a short time.

Crocs was like that too. A product that made a splash, sold a lot of brightly colored shoes at a great margin and then contracted.

I think the problem with Groupon wasn't Groupon. The problem was the whole system trying to treat it like Starbucks when it was more like Star Wars. Star Wars wasn't a failure because it stopped making money.. ..wait. Bad example. Exceptions prove the rule.

Financially, a company is the NPV of all its future cash flows. In practice, the system assumes those cash flows will continue steadily forever, growing if the company is healthy. If it tries to swallow a company that will exist for just 4 years, it chokes.

I think we need to be on the lookout for things like this. The world is getting fast paced. Maybe we need to be able to deal with 4 year companies.
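The valuation mismatch is easy to see numerically. With invented numbers (a 10% discount rate and $100/year of cash flow), a going-concern perpetuity is worth roughly three times the same company that disbands after 4 years:

```python
def npv_perpetuity(cash_flow, rate):
    # Standard going-concern assumption: identical cash flows forever.
    return cash_flow / rate

def npv_finite(cash_flow, rate, years):
    # The same cash flow, but the company disbands after `years`.
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

rate, cf = 0.10, 100.0
print(round(npv_perpetuity(cf, rate)))  # 1000
print(round(npv_finite(cf, rate, 4)))   # 317
```

A system that prices everything with the first formula will badly misjudge companies that the second formula actually describes.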

The article clearly makes the case that tracking churn is critical to analyzing the overall health of a business.

And the reason why churn is so critical for cash-strapped startups is that new customers are so expensive. In many cases it is an order of magnitude more expensive to sell to new customers than existing customers.

This reminds me of FB's ad business - they keep introducing new ad spaces, expanding their offering to more platforms, introducing logout ads, video ads, scrollable install ads, an ad platform, etc. With all of that they are just masking the fact that a large portion of their advertisers stop using them and that most of their products start losing popularity relatively quickly.

> Note: the graphs included in this article were sourced from Pershing Square Capital Management's initial presentation on Herbalife, available here.

Bit of a submarine there, eh? Anyway, this doesn't make a case against Herbalife. In fact, it suggests that their data is saying the opposite. Look at the part where they talk about popping:

> Along with Japan and Israel, this same pattern shows up in Spain, France, Germany and several other countries that Herbalife has entered.

Now look at their chart of # of countries against revenue. Herbalife is apparently up to almost 80 countries. Even back in the '90s, they were in 20-50 countries. Let's be generous and say that 'several other' is 5 (I don't have the patience to go through Ackman's propaganda), and note that this will be an exhaustive list since it's being assembled by people with literally hundreds of millions of dollars of incentive to make the picture look as ugly as possible; that's 10 countries that 'popped'. Out of 80. If the other 70 have not popped, that does not seem like Herbalife will have problems in the future.

(There's also the problem that if each country can only be soaked for a short period before 'popping', revenue should not be regularly going up! It should be flattish as Herbalife desperately opens up ever more countries to replace disappearing revenue from the popping countries.)

They did the emerging markets growth strategy and it worked to help juice their numbers for a few years until Android and iOS totally destroyed all but their core customer base.

The real danger in this strategy is not that it grows an unsustainable business, just that the sustainable portion is much smaller than the peak and if you forecast up and to the right growth forever, it eventually doesn't happen and you have budget shortfalls and layoffs.

Hypergrowth is exciting and gets you headlines, but sustainable, steady growth is probably a happier long-term situation for most businesses.

I love talking about business models (I run a bootcamp in SF helping people to visualize them), so I figured I'd chime in here...

>Eventually they're going to run out of countries to enter, and that will be the end of Herbalife if they don't figure out a more long-term, sustainable business model.

This statement is pure speculation. Why hasn't any of the same analysis been done on Groupon? I'm not sure why the article conflated the two stories of Groupon and Herbalife, when their data sets and underlying assumptions are clearly very different.

> You should be able to demonstrate sustained growth in a single market segment, whether it's a geographic region, a certain type of customer, or something else.

Isn't this why diversification exists? Why companies like GE, P&G, and now Google, have massive portfolios of companies, as opposed to one single product that drives all growth? I'm having a hard time understanding what the takeaway is here...

The Herbalife graphs seem to indicate that in the countries they enter there definitely is an initial temporary pop but business doesn't go down all the way to zero after that - it seems to settle down at a residual steady state. That could be sustainable if the 2 graphs provided (Israel, Japan) are representative of all the countries they enter. After they enter all available markets, their revenue will settle down to the sum total of all those steady states.

It makes sense, given that VCs are often betting on big buyouts. This is the "castle in the air" theory. At some point, I think the pendulum will shift toward investing in companies that, while they may not have cash flow here and now, at least have the potential to generate cash.

My startup is struggling to have profit (we have revenue, but no profits yet, although we have growth of revenue), and many, many, many times we felt tempted to pull that sort of stunt (pyramids, freemium abuse, shady ads, etc...)

Bill Ackman was wrong about Herbalife. According to wikipedia, he lost between $400 million to $500 million by shorting Herbalife last year. What that means is that other investors disagree with his (and this article's) analysis of Herbalife.

As a theoretical aside, I wonder if it'd be possible to have a type-system-based solution to these kinds of problems, where variables coming from the user (or from another program) are considered 'unsafe' and the compiler refuses to let exec() or whatever use them until they've been through a cleaner/tester of some kind. (OK, I know PHP doesn't have a compiler as such, but a static checker of some kind could work the same way.)
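Python has no static type system, so the closest sketch is a runtime check, but it shows the shape of the taint-tracking idea (all names here are invented for illustration):

```python
# Sketch of taint tracking: data from outside the program is wrapped,
# and the dangerous sink refuses anything that hasn't been cleaned.

class Tainted:
    """Wraps any value that arrived from the user or another program."""
    def __init__(self, value):
        self.value = value

def sanitize(tainted):
    # Hypothetical cleaner: real code would validate or escape here
    # before unwrapping the value back into a trusted plain str.
    return tainted.value

def run_command(arg):
    # The sink only accepts plain, trusted strings.
    if not isinstance(arg, str):
        raise TypeError("refusing to exec unsanitized input")
    return "exec(%s)" % arg  # stand-in for actually shelling out

user_input = Tainted("foo; rm -rf /")
try:
    run_command(user_input)
except TypeError:
    print("blocked")
print(run_command(sanitize(user_input)))
```

A static checker would enforce the same rule at compile time instead of raising at runtime, which is exactly the comment's wish.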

As a side note, this is how many SQL injection attacks happen too. You almost never want unfiltered user input to directly interact with your system. A while back, I did an episode on how SQL injection can lead to code execution by using unfiltered user input on a LAMP stack. See it @ http://sysadmincasts.com/episodes/21-anatomy-of-a-sql-inject...

Directly passing user data to the command line is highly dangerous. It allows an attacker to execute arbitrary commands on the command line [0]. escapeshellarg [1] has to be used to escape a string that will be used as a shell argument.
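The same rule exists outside PHP. In Python, for instance, the standard fixes are passing an argument list so no shell is involved at all, or quoting explicitly with `shlex.quote`, the rough analogue of escapeshellarg:

```python
import shlex
import subprocess

user_input = "photo.jpg; rm -rf /"

# Preferred: pass an argument list so no shell is involved and there
# is nothing to inject into, e.g.:
#   subprocess.run(["ls", "-l", user_input])

# If a shell string is unavoidable, quote every interpolated value:
command = "ls -l %s" % shlex.quote(user_input)
print(command)  # ls -l 'photo.jpg; rm -rf /'
```

With quoting, the `;` arrives at `ls` as part of one literal filename argument instead of starting a second command.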

Did someone else notice that that GitHub search returns mostly results where exec is completely disconnected from $_GET? And that, I'd say, the last 20 pages contain the same thumbnail script, which simply contains the string "exec" and uses $_GET somewhere?

Maybe it's still a somewhat common asinine error, but I wouldn't take those search results as proof of how common it is...

I already have a thing called "all seeing eye" that picks up on things like this on a pre-commit script in SVN and tells people to go away if they do something stupid. It's done some wonderful work so far.

If you often split the bill on anything Rent/Utilities/Drinks/etc you should check out Splitwise [0]. They have a Web/iOS/Android apps and it has been a life saver. I've used it for a little over a year now and it makes all expenses a breeze. It keeps track of who owes who what and can (when enabled) even simplify the debts so if person A owes $10 to person B and person B owes $10 to person C then Splitwise will report that person A owes person C $10.
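The simplification step described above can be sketched as netting each person's balance and then greedily matching debtors to creditors (my own sketch, not Splitwise's actual algorithm):

```python
from collections import defaultdict

def simplify(debts):
    """debts: list of (debtor, creditor, amount). Returns netted transfers."""
    balance = defaultdict(float)  # net position per person
    for debtor, creditor, amount in debts:
        balance[debtor] -= amount
        balance[creditor] += amount
    owers = sorted((p, -b) for p, b in balance.items() if b < 0)
    owed = sorted((p, b) for p, b in balance.items() if b > 0)
    transfers = []
    while owers and owed:
        (d, da), (c, ca) = owers[0], owed[0]
        pay = min(da, ca)
        transfers.append((d, c, pay))
        owers[0], owed[0] = (d, da - pay), (c, ca - pay)
        if owers[0][1] == 0:
            owers.pop(0)
        if owed[0][1] == 0:
            owed.pop(0)
    return transfers

# A owes B $10 and B owes C $10 collapses to A paying C directly:
print(simplify([("A", "B", 10), ("B", "C", 10)]))  # [('A', 'C', 10.0)]
```

B drops out entirely because its net balance is zero, which is exactly the behavior the comment describes.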

Sperner's Lemma is probably my favorite mathematical result. It is easy to explain, and pretty easy to prove, but it has a higher than average number of "aha" moments along the way. The mathematics is not too intimidating - probably the scariest thing is the generalization to n dimensions, which requires induction. It is also fun to explain the connections to Nash's theorem and so on, as invoked in this article.

My girlfriend and I moved into a house with 2 friends. We figured out a fair way to divide the rent and expenses.

We measured out the sqft of the bedrooms, and used the percentages of each room from the total bedroom sqft to split the rent and any fixed expenses like internet and lawn care (since those costs don't change based on per person usage). We applied the same to cleaning supplies and things like light bulbs in common spaces.

(Chosen Bedroom SQFT / Total Bedroom SQFT = Chosen Bedroom Expense %)

For power, since that can vary month to month based on usage, we split it by the number of people in the house.

This month, one of the friends is moving out and we're taking her room over as an office, so the power split changes to 3-way instead of 4-way, but the other friend won't see a change in her other expenses because her percentage of the bedroom floor space is still the same.

Easy on math and easy to manage.

All expenses are tracked in a Google Doc shared to all of us, and at the end of each month I run the numbers to calculate who pays what to who to balance it out.
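The whole scheme fits in a few lines; the room sizes and dollar amounts below are made up for illustration:

```python
def split_fixed_costs(bedroom_sqft, total_cost):
    # Chosen Bedroom sqft / Total Bedroom sqft = Chosen Bedroom's share
    total_sqft = sum(bedroom_sqft.values())
    return {room: round(total_cost * sqft / total_sqft, 2)
            for room, sqft in bedroom_sqft.items()}

# Hypothetical rooms splitting $1600 of rent plus fixed expenses:
print(split_fixed_costs({"A": 120, "B": 100, "C": 100, "D": 80}, 1600))
# -> {'A': 480.0, 'B': 400.0, 'C': 400.0, 'D': 320.0}
```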

Interesting that almost all the comments thus far are regarding the actual content of the article and intention of the page rather than the relative novelty page itself. I think that alone is evidence that the Times has hit its stride as a 21st century news outlet.

This app did not make much sense to me because I was sitting at my computer making choices for myself and my non-present roommate. So the related article with useful visual explanation of Sperner's Lemma may be more useful reading if you don't have a quorum of your roommates to work with this morning:

Can someone explain why the outcomes differ when the roommates are in a different order? Imagine an apartment with one nice room and one terrible room, total rent $1000, and two roommates, one rich, one poor. The poor roommate will always choose the less expensive room; the rich roommate will always choose the nice room unless the price difference is greater than $500.

If the rich roommate is Roommate A, then it converges to a 50/50 split, each paying $500. If the rich roommate is Roommate B, it converges to a 75/25 split, with the rich roommate paying $750. What's going on there?

[EDIT: Also, I've just discovered with three roommates that the results may be displayed without the first roommate ever choosing at all. ???]

I've lived with tons of roommates over the last 5 years and do not see the utility in this... There are just too many factors to take into consideration. We are currently in a 4-bedroom house with other amenities. Two of the bedrooms are very small, and two are large. First, each room starts at a base amount to take the shared spaces into consideration... which is $50 for us. So... $700/mo. total - $200/mo. for all 4 rooms = $500/mo. leftover. Then we split the rooms into amounts that we agreed seem fair based on the sizes. As a median, $500/4 = $125. The large rooms are $165, small rooms are $85. Add in the $50 from before and we have:

Large Room: $215
Large Room: $215
Small Room: $135
Small Room: $135

Basically, you're paying a base price for the shared space, then an additional price for the approximate square footage of your room. We avoid going into exact measurements to keep it simple.
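That base-plus-size scheme can be sketched directly; the weights below are the commenter's agreed per-room amounts, so the output reproduces their numbers:

```python
def room_rents(total_rent, rooms, base=50):
    """Each room pays `base` for the shared space; the remainder is split
    by agreed size weights (rough sqft or a large/small factor)."""
    leftover = total_rent - base * len(rooms)
    total_weight = sum(rooms.values())
    return {name: base + leftover * w / total_weight
            for name, w in rooms.items()}

# $700 total, $50 base per room, large rooms weighted roughly double:
rents = room_rents(700, {"large1": 165, "large2": 165,
                         "small1": 85, "small2": 85})
print(rents)
# -> {'large1': 215.0, 'large2': 215.0, 'small1': 135.0, 'small2': 135.0}
```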

As far as utilities go, it was a nightmare to organize them every month and collect what everybody owed, especially once people started owing for more than a couple of months. Making sure to record payments, update the balances, ask for payments, pay all the bills, and sometimes organize which bills got paid first because we were behind was a disaster.

So, for utilities, I calculated the yearly average of all of them divided by the minimum number of roommates (4 in our case), added ~10-15% on top, and that became our new monthly utility bill. Everybody pays the same amount every month. Rent is due on the 1st. Utilities are due on the 15th. I track payments using Google Docs so they have access.
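The flat utility bill is simple arithmetic (the dollar figure below is hypothetical, with a 12.5% cushion):

```python
def flat_monthly_utility(yearly_totals, min_roommates, cushion=0.125):
    """Average all utility bills over the year, divide by the minimum
    headcount, and pad by ~10-15% to build the house-supply cushion."""
    monthly_avg = sum(yearly_totals) / 12
    return round(monthly_avg / min_roommates * (1 + cushion), 2)

# Hypothetical: $4800/yr of combined utilities, 4 roommates minimum.
print(flat_monthly_utility([4800], 4))  # -> 112.5
```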

Also annoying was the process of people buying supplies for the house and wanting reimbursements... Sometimes people would buy the same things and we'd end up with a stockpile, etc. So that extra cushion in the utilities goes toward purchasing house supplies in bulk.

All of the money goes into a PayPal student account with a debit card. Everybody has access to Mint.com to see transactions/balance, but only I have access to the actual account. Anybody can use the debit card, and I get text/email alerts for every transaction. If we end up having a large balance, we can improve something in the shared spaces or repay debts from roommates that had to be kicked out for owing (face it... that money is gone - Always collect a security deposit!).

We believe this system is fair and scalable, as it rewards people for sharing rooms rather than making them pay more for the same space... But as the number of people grows, the cushion in the utilities grows to a larger percentage... making it possible to improve our shared-space living conditions as a reward for putting up with too many roommates. For instance, supplying everything we need for a full garden in the back yard, etc.

I apologize for this long-winded explanation, but I hope this helps somebody. Compared to the way of normally splitting up a house, this is much simpler. Everybody knows what they pay in rent for their room, they know how much they'd pay if they moved their partner in, they know how much utilities are, and everything is transparent.

The main thing I haven't figured out is chores/shared duties outside of just cleaning up after yourself. We have a whole 100+ year-old house to maintain/rehab, and there is a lot to do. Working on some kind of system where people can choose from a revolving list what chores they want to do to accumulate their points. First come, first choice. HabitRPG looks promising in that you can get groups together and tasks go up in value the longer they aren't done.

Also, having repercussions for late-payments/lack of chores. We're not big on charging interest/fines to those that cannot afford it already... But will soon be experimenting with taking away access to house amenities... For instance, the upstairs (nicer) bathroom, the high-speed Internet, our shared access to the local Hackerspace, etc.

This seems buggy to me, but I'm not familiar with the algorithm. In my test case, neither roommate agreed to pay more than $562.50 for room 1, but the result was that roommate A had to pay $578.13 so it seems kind of arbitrary.

Reminds me of an old puzzle: how do you split bread in two parts so no one is upset? Alice splits; Bob chooses the piece he wants. Since Bob will take the piece he prefers, Alice will try hard to split as close to 50/50 (by her own valuation) as possible and, most importantly, no one will feel bad because everyone had their say.
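That puzzle is the classic divide-and-choose protocol, and it's easy to simulate (a toy sketch; both valuation functions are assumed monotone on [0, 1] and normalized so the whole loaf is worth 1):

```python
def divide_and_choose(alice_value, bob_value):
    # Alice cuts at the point where the two pieces are equal *by her own
    # valuation*; Bob then takes whichever piece he values more.
    # alice_value(x) / bob_value(x) give the value each assigns to [0, x].
    lo, hi = 0.0, 1.0
    for _ in range(60):  # binary search for Alice's 50/50 point
        mid = (lo + hi) / 2
        if alice_value(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2
    left = bob_value(cut)
    bob_piece = "left" if left >= 1 - left else "right"
    return cut, bob_piece

# Alice values the loaf uniformly; Bob prefers the right end.
cut, bob = divide_and_choose(lambda x: x, lambda x: x ** 2)
print(round(cut, 3), bob)  # -> 0.5 right
```

Neither party can feel cheated: Alice made both pieces worth exactly half to herself, and Bob took the piece he preferred.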

I think the two people just act as a tiny corporation in the bidding process, and enter a single set of bids, just like the single people. Equally, if one person wanted two rooms, they would enter two sets of bids. I further think that this could easily end up putting everyone in the same rooms, at the same prices.

In terms of paying for the bedrooms, this seems fair. Yes, the couple are each paying half as much for the room as when it had a single occupant. But they're also only getting half the use of the room, because they're sharing it.

In terms of paying for the shared spaces, this does seem less fair, because everyone else now has a slightly smaller share of the use of those, whereas the couple jointly has two of those slightly smaller shares. I think the way that this ultimately shakes out is that the other housemates might decide that none of the room choices offer value for money, and move out of the house.

Of all the things to appear on HN! In a kind of joke response to a recent house search with a couple of friends, I made a site that splits the rent based on room size. Rough around the edges, but feel free to try it out. http://whogetsthesmallroom.com/

"Fair" in the good old American capitalist sense of the word. Money is supposed to be a proxy for desire here... The one who desires the room more gets it because they are willing to pay more for it. The reality is that money isn't an equal proxy for desire. The poorer roommate can want the big room with every fibre of their being, but be casually outbid by a rich roommate with money to burn.

Edit: as expected, HN swings decidedly capitalistic and my comment has been downvoted.

If you're looking for a language that will enable "bottom-up development", where you gradually define, in Paul Graham On Lisp style, a language optimized for your problem domain, Golang is not the language for you.

Similarly, if you're looking for a language that will read and write like a specification for your problem domain, so that writing your program has the side effect of doing half the work of proving your program correct, Golang is also not a good choice.

What's worse, those two approaches to solving programming problems are compatible with each other. Lots of sharp programmers deeply appreciate both of them, and are used to languages that gracefully provide both of those facilities. If that describes you, Golang is a terrible choice; it will feel like writing 1990s Java (even though it really isn't).

There are two kinds of programmers for whom Golang will really resonate:

Python and Ruby developers who wish they could trade a bit of flexibility, ambiguity, or dynamism for better performance or safer code seem to like Golang a lot. Naive Golang code will outperform either Python or Ruby. Golang's approach to concurrency, while not revolutionary, is very well executed; Python and Ruby developers who want to write highly concurrent programs, particularly if they're used to the evented model, will find Golang not only faster but also probably easier to build programs in.

Systems C programmers (not C++ programmers; if you're a C++ programmer in 2014, chances are you appreciate a lot of the knobs and dials Golang has deliberately jettisoned) might appreciate Golang for writing a lot like C, while providing 80% of the simplicity and flexibility value of Python. In particular, if you're the kind of programmer that starts projects in Python and then routinely "drops down" to C for the high-performance bits, Golang is kind of a dream. Golang's tooling is also optimized in such a way that C programmers will deeply appreciate it, without getting frustrated by the tools Golang misses that are common to other languages (particularly, REPLs).

At the end of the day, Golang is overwhelmingly about pragmatism and refinement. If you're of the belief that programming is stuck in a rut of constructs from the 1980s and 1990s, and that what is needed is better languages that more carefully describe and address the problems of correct and expressive programming, Golang will drive you nuts. If you're the kind of person who sees programming languages as mere tools --- and I think that's a totally legitimate perspective, personally --- you might find Golang very pleasant to use. I don't know that Golang is a great language, but it is an extremely well-designed tool.

It's ironic that the "better" the language (for some hazy definition of "better") the less actual work seems to get done with it. So Go can be pretty annoying at times, and so can Java (I've said before that I find the two almost identical, but that's beside the point now); and C is horrible and completely unsafe and downright dangerous. Yet more useful working code has probably been written in Java and C than all other languages combined since the invention of the computer, and more useful code has been written in, what, 5 years of Go(?) than in 20(?) years of Haskell.

Here's the thing: I am willing to accept that Haskell is the best programming language ever created. People have been telling me this for over 15 years now. And yet it seems like the most complex code written in Haskell is the Haskell compiler itself (and maybe some tooling around it). If Haskell's clear advantages really make that much of a difference, maybe its (very vocal) supporters should start doing really impressive things with it rather than write compilers. I don't know, write a really safe operating system; a novel database; some crazy Watson-like machine; a never-failing hardware controller. Otherwise, all of this is just talk.

This is a great rant. Having recently done the 'golang' tutorial on the site a lot of it resonated with me. I have a slight advantage in that I worked at Google when Go came into existence and followed some of the debates about what problem it was trying to solve. The place it plugged in nicely was between the folks who wrote mostly python but wanted to go faster, and the folks who wrote C++. It was a niche that Java could not fill given its structure.

In a weird way, it reminds me of BLISS[1]. BLISS had this relationship to assembler that "made it manageable" while keeping things fast. BLISS was replaced by C pretty much everywhere that C took hold (one theory is that BLISS is the 'B' programming language, Algol is the 'A' language; personally I think BCPL is a better owner of the 'B' moniker). The things that C has issues with, memory management, networking, and multi-threading, Go takes on front and center. It keeps around some of the expressiveness and type checking that makes compiling it half the battle toward correctness.

Now that was kind of what the Java team was shooting for as well but with limited success. I feel like between Go and Java we've got some ideas of what the eventual successor language will look like. For me at least that is a step in the right direction.

These sorts of posts are profoundly boring. I know people will up arrow it -- some sort of spiteful "Down with Go!" contrarian thing, when they aren't talking up rust -- but it isn't because the content is interesting or illuminating, but rather as some sort of activist thing.

This particular piece (by a high school student, as an aside) starts off trying to create a surrogate for generics in Go.

Don't.

Here's the thing -- most of these posts are not about people making real code, but people making -toy- code. Where every function is all things to all people.

The number of times I've needed a generic abs in my life -- zero.

The number of times I've needed a double floating point abs in my life -- every single time.

That's the thing about generics and real, actual world code: Your types are generally much less amorphous than you think. They really are. This illusion that everything needs to be everything just does not hold in the real world.

The agreeableness of Python is the reason I settled on it after experience with a bunch of other languages. I was starting to find software engineering to be a chore and was losing interest in it until I started using Python. No language is best at everything, and all the ways python isn't the best have been worked around without much bother.

Go did not strike me as a language I would enjoy writing, despite the strengths it surely has in some areas. I would just be giving up other strengths I prioritize more highly.

I think people gave Go a lot of credit just because it was designed at Google. People have high respect for Google engineers, so they assume what Googlers have designed must be flawless. So they take for granted that ideas like the lack of exceptions or the lack of operator overloading are a good thing, even though I'm quite sure they would be quick to criticize such BS if it were a feature of a language not coming from Google. But, in the long run, the language must defend itself on its own, without the authority of the people/organization behind it.

> I've been using Go since November and I've decided that it's time to give it up for my hobby projects. I'd still be happy to use it professionally, but I find that programming in Go isn't fun in the same way that Python, Haskell, or Lisp is.

I can't leave Go, but indeed I think absolutely the same. The thing is that Go was designed to replace C++, and that absolutely failed. So we have a Python/Ruby replacement with very old patterns (manual error checking, for instance).

Indeed I think Go is an awesome language, but sometimes I really feel like a monkey, repeating myself over and over again.

As much as I want to emphasize that this is not the right way to think about programming in Go, I do want to point out that the example has a lot of extra code that isn't needed. Indeed, it's not really possible to have a bug of the kind the author wrote if written properly:

It is so funny to see that somebody else got annoyed by Go the same way I did. I could not get over the fact that I can't pass any type to a function other than interface{}. When you do that it just delays the problem, so I have decided not to have a generic function but multiple specific ones. This violates the DRY principle but at least works. I am still looking for best practices with Go. I think the biggest advantage of the language is the fact that Google uses it in production and the libraries are well tested. The community is great too. Clojure has a way smaller community, which limits the usage of that environment quite a bit. I agree with the author on Lisp being more fun than Go, but that is a single dimension of the entire problem.

> The idea is that there's simply no way that any group of designers could imagine how people will want to use their language, so making it easy to extend solves the problem wonderfully.

On the other hand, there's no way of making a language so easily extensible while also maintaining relatively uniform idiomatic, design, and style conventions across a language community. Go very heavily favors the latter.

> In Lisp, CLOS (Common Lisp Object System) was originally a library. It was a user defined abstraction that was so popular it was ported into the standard.

CLOS is actually very similar in some ways to Go's structs. Both emphasize encapsulating data inside an "entity" (object/struct), and separating the notion of behavior from that entity. To quote one of the language authors from GopherCon[0]: "Interfaces separate data from behavior. Classes conflate them."

Lisp was "designed" (if you can call it that) around the principle that extending a language should be as easy as writing a program in that language. Go was designed around the principle that there should be only one dialect of the programming language, for the sake of cohesiveness. It's a stronger assertion of the Pythonic motto, "There should be one, and preferably only one, obvious way to do it".

Every design decision has trade offs. Since the author only covers the more obvious negative aspects of those 2 features, I'd like to cover the more subtle positives.

> Extensibility

One really great thing about that is how consistent it is. Every func in Go has a unique identifier: the package import path and the func name pair. This makes it possible to build tools like godoc.org and the `doc` command that let me look up the exact behavior of (unfamiliar) code.

In this case, I can go to godoc.org/math/big#NewInt or type `doc big.NewInt` and see:

// NewInt allocates and returns a new Int set to x.
func NewInt(x int64) *Int

In fact, it's so predictable that I don't have to think. As long as it's Go code, if I ever run into an unfamiliar func F of package P, I can always go to godoc.org/P#F or `doc P.F`. No searching, just an instant lookup. If I need to do 30 such lookups to solve a task, it makes a big difference. This applies to _all_ 3rd party Go packages. That is big.

On the other hand, something like `x + y` is magic to me. I know that computers work with bytes at low level, and I want to be _able_ to understand what happens on that level. I understand and accept such magic for built in types. But I certainly wouldn't want to be reading a 3rd party Go library code that says `x + y` on its custom types. Where would I go to find out what that custom plus operator does? How does one make an unexported version of a plus operator? There'd be less consistency, more exceptions and rules, more variations in style and less tools that can be built to assist/answer questions about Go code.

> The Type System

I don't have time to cover this atm, but there are some advantages to the explicitness and verboseness of Go's approach. Can you think of any?

I'm not suggesting it's the best it could ever be, just that there are both advantages and disadvantages that should be considered.

All languages suck, at some level. Whether this matters to you is a matter of need, willingness to compromise, and benefits to you, for some definition of you. This often leads to people writing their own language if they can which generally only pleases themselves.

Go's strengths are tooling and libraries. Especially when you see the well written libraries all revolving around internet protocols, encodings, and content generation. It makes it extremely easy to write performant web sites and http API endpoints which many people do daily.

All that said, I hope Go adds D-like compile-time code generation and static typing features, or that [D,Rust] gains better tooling and HTTP service-oriented libraries and APIs that are as well written as the Go standard library packages. Secondly, I hope [D,Rust] sees how awesome having a common automatic format, build, and test tool is. In so few languages is the testing package as simple as Go's. In so few languages is the build process as easy as Go's.

If you guys like Python and are having second thoughts about Go, take a look at Nimrod.

Also, I personally look at Nimrod as a faster and slightly cleaner Python with a few extras, and I don't try to break the compiler using all of its cutting edge features. If you use it that way you will be happy.

I have the same feelings about Go. After working for several months on a side project in Go, I gave up, because I felt not as productive as with Java (not to mention Scala). There are lots of annoying things: missing generics leads to a lot of useless code which could carry bugs. Without generics you cannot create an Option data structure, but have to return nil (Hoare's billion-dollar mistake). Without generics you cannot write concise object/functional code like someCollection.map(i -> i * i). Go has no good support for immutability. Mocking is awkward, because you have to code your mocks by hand. Unicode handling is a pain.

That is why Go attracts mainly people from scripting languages (they get a bit more type safety and better performance) and C (they get a bit more type safety and fewer errors). Coming from other languages, Go is not that attractive. I'm hoping for Rust to succeed.

A 16 year old kid decides that Go isn't suitable for writing an embodiment of Principia Mathematica. Surprise: Haskell is a better Haskell than Go.

Go is something like C, but with a few more features, plus GC and some neat concurrency facilities. Why would anyone expect it to be anything like Haskell, or even to have a type system approaching that kind of power?

Does anyone actually use CLOS? It's always held up as this great example of language extensibility, but my impression is that since it's not part of the core language Lisp folk tend to create their own abstractions instead.

I have a problem with his extensibility critique. I don't agree that having generic types is good for the language. The problem with using polymorphism is that your users are going to abuse polymorphism. There will be class hierarchies that confuse and abstract away from the algorithm at hand. You will be mutating and mangling objects rather than dealing with the issues at hand.

Reading a go source file is like following a for loop. It's quite technical. Reading a Java source file can be a case of trying to figure out the high level abstractions of the program.

Umm... correct me if I'm wrong, since I'm not well versed in either Haskell or Go, but doesn't the Go version read better in terms of scoping? I mean, just by reading the Go version, I know what it will evaluate to, but in the Haskell version I don't know the precedence order just by reading.

Very well put. I've tried Go for a while now and I was wondering why it doesn't appeal to me like it does to some of my friends. Reading through your response, I found the reason for my gut feeling. I guess I belong in your first category ;).

Fantastic! We need more of this kind of thinking. "No computer required" should be the default, not the exception.

I've been playing with some ideas in my head for a while now that we should be able to build most apps this way, and not just prototypes. If we can extract the default widgets and behaviour from applications and make them reusable and connectable without code, then we'd be taking steps towards a real revolution.

I am really excited about the direction we're going in with products and tech like this.

What a great idea! I've been using Balsamiq for a few years now, but I usually sketch my idea out on paper before translating that to the editor manually. I would kill for this kind of hot-linking functionality for general web apps with the off-the-cuff feel of hand-drawn sketches.

Just browsing your site, it looks like your main app is used like that, albeit apparently for more refined mockups (I'm sure I'm mixing up terminology here, but I'm not a designer so bear with me). Next time I have to design a site I'll try integrating Marvel with my Balsamiq workflow and see what happens.

App looks nice (can't check it out, no iDevice). I like the simplicity of it. Do you plan on adding any vision capabilities, like auto-detecting rectangles as candidates for buttons, circles as candidates for radio buttons, etc.? I can also see your app being used to make quick & fun games which you can send to your friends. Think taking a picture of a street with a bar door as a button, then taking a picture inside, making the bartender a button, then a picture of the drink, etc...

It needs access to a hell of a lot of files in my dropbox account. Definitely not something I want scouring my files, just so it can show a few mockups - is there any way to restrict it to being able to access just a single folder, or is it really an all-or-nothing problem with Dropbox auth?

Stumbled upon this last week and used it in conjunction with FiftyThree's Paper app to create a pretty impressive prototype that I was able to view right on my device.

Bit more of a process when using Paper, as it only seems to be able to export an entire notebook as a PDF, requiring me to save pages individually in order to use them in the app. I seem to recall reading somewhere that MarvelApp was capable of dealing with PDFs, but I'm evidently mistaken. It would have been a nice feature, though I'm not sure if many others would benefit from such functionality.

Not sure about this specific execution, but the idea of having a unified input surface is very appealing. 3D gestures have also been floating around, but this seems like the closest thing I have personally seen to the current reality of computer use.

That said, the rotation of the image gesture seems overly confusing. Mouse or trackpad seem like they would be an easier way to go...

The idea of turning the surface of the keyboard into a trackpad is very cool; you could almost get rid of the mouse. Except, how do you click? Accidental presses of keyboard keys are probably what makes it impractical.

Well, this is interesting. A while back I got a motion sensor to try some of this out. My current keyboard[1] was originally designed for the ThinkPad laptops/computers and has the joystick between the G/H/B keys. The mouse buttons are below the space bar. That can speed some things up, but the trackpad on the MacBook is much more expressive. In the interest of full disclosure, I do tend to collect obscure HCI devices (like the Microsoft Commander, if you remember that one!)

Cool, I had the same basic idea at one point to do this with an Arduino. I wanted to have infrared emitters below the keys, and have an infrared sensor somewhere slightly in front of the keyboard (but at finger height). Your fingers should reflect infrared light back to the sensor. Not sure how accurate it would be, but I think it would be accurate enough to capture sweeping gestures such as your hand moving up/down for scrolling. Something like this wouldn't be precise enough to replace a mouse, but definitely useful enough to improve workflow (scrolling, switching between workspaces, apps, shortcuts). If the keys were clear, I'd imagine it would be even easier for the sensor to determine where your hand is based on how much infrared light your hand is reflecting back from different positions.

Strange that the "swiping & pinch to zoom" gestures (starting at 41) are exactly opposite of how they work in an iPad.

I can understand swiping being backward - people can disagree about "move the camera" vs. "move the paper" - but pinch-to-zoom-in is wrong in all contexts.

ah, microsoft. you've got this cool research, but somehow you manage to make the usability all wrong. how did these people not notice that they implemented pinch-to-zoom backward from how it works on their phones?

This looks very similar to the Leap Motion. Both use infrared sensing, although they seem to use it in different ways. I think the setup in the video could probably be recreated using a Leap Motion integrated into the spacebar or other keys. I'm curious to see what they'll do with it (whether it'll actually become a product.)

MS has been putting a lot of emphasis on hand gesture recognition since their Kinect 'surfaced'. Meanwhile, they're still quite behind Apple when it comes to touchpads. I wonder whether it will pay off in the future - I for one would quite like a Surface-type tablet with this sort of keyboard, so that one doesn't have to lift a hand for gestures.

They could probably double their resolution with an IR sensor under the keys, transparent elastomer, and IR-transparent plastic for keys. But they did mention sampling at 300Hz was an achievement so maybe they're running into some embedded issues.

I find this functionality very cool, but does it have to be motion sensing? I find physically moving my hand off the keyboard and back on to be somewhat tiring, especially since it apparently does the same thing as "scroll down". But since the hand stays on the left side, how about adding a button underneath the left palm, so that moving the palm slightly performs the command? This, I think, takes away any inconsistencies of "motion sensing" while still keeping your efficiency high (not to mention keeping your hands comfortable!).

All well and good, but Microsoft refuses to let users of the Microsoft Natural Ergonomic Keyboard 4000 swap the middle "Zoom" to "Scroll"... so I don't see how Msft PMs will let this fulfill its promise.

PS: the Keyboard drivers ship with key remapping software, they just don't let you remap the zoom, though if a user is willing to hack the config they can do it manually :(

I did not enjoy working with it; it was one of the few times I had to use a Mac to do something. However, I think these new forms of interaction will definitely play an important role in the way we will use (mechanical) interfaces in the future.

I feel like Mercurial still has a chance to become a real force if it can provably solve some of the real issues with git. It won't do to just have a nicer CLI since most people are used to git by now.

Here are some of git's real problems:

* Performance issues with multi-GB git repos
* Handling of large binary files
* Submodules - Mercurial has subrepos, but I don't know how they compare

While a lot of tutorials mention how complicated git is in contrast to Mercurial, I - being a git native - feel the other way around. Git is intuitive, with a small number of concepts necessary to grasp my whole workflow.

Using this workflow with Mercurial is really frustrating when I do it - the occasional pull request for a Python-based project. A git branch as a concept is really simple; the Mercurial ways I just can't wrap my head around (granted, I only use it occasionally).

As a platform, I like how all the porcelain in Mercurial is implemented in high-level Python. I can only wonder how productive writing custom porcelain commands in Mercurial is, given that interface.

"That is the experiment that makes rat-running experiments sensible, because it uncovers the clues that the rat is really using-- not what you think it's using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat-running."

This kind of response by lab animals to stressors in their environment reminds me of the "Rat Park"[1] experiment, in which rats who were stressed out from being kept isolated in small cages (rats are social animals) had a tendency to become addicted to morphine, but those who lived in a pleasant environment did not.

The hypothesized mechanism is very interesting. Perhaps mice vary their behavioral response to pain depending on how safe it is to express distress from pain. VikingCoder has already given the definitive comment: further controlled experiments will tease out how important this effect is over what range of behaviors. There are many, many, many kinds of mouse experiments (I am currently doing reading on behavior genetics research on learning and intelligence and fear and aggression in mouse model organisms), and some will be more influenced by what the interesting preliminary report here suggests than others.

See the book How Genes Influence Behavior[1] for a fascinating description of how we can learn about human behavior by studying mouse behavior (and even by studying fruit fly behavior!).

> "It's a primordial response," he says. "If you smell a solitary male nearby, chances are he's hunting or defending his territory. If you're in pain, you're showing weakness."

I don't know about those results, but this explanation sounds preposterous: it doesn't work if you're hunted down by a lioness (which does the hunting while the male lion sleeps); it also doesn't work if you're spotted from the air by an eagle (which is one of the main hazards if you're a small rodent).

Why every behavior has to be explained in terms of survival, we'll never know. Stephen Jay Gould was so good at debunking those easy explanations; how I miss him.

I think this serves as a good example of (one of) the issues with animal testing. The scientist in this article doesn't think that we need to "redo decades of animal research", but it certainly casts doubt on previous results.

In general, testing on animals isn't effective. Considering the amount of suffering it inflicts on animals, I'm not sure that it's morally defensible. The British Union for the Abolition of Vivisection has a [good resource][1] on the issues.

It sounds like each test involved the same person both giving injections and sitting in the room. I wonder if they can somehow mechanically inject the irritant without human involvement. Then they could have male/female participants sit in the room without knowing if any animals are present. This would help distinguish whether the rodents were reacting to a general difference in scents, or males and females give off different signals when handling rodents.

Their conclusion that "Male odors seemed to act like painkillers" seems like only one of several possible interpretations?

My first reaction was that the possible presence of other male mammals provided a selection bias for individuals who displayed less pain. Amongst individuals of the same species, showing pain would be a sign of weakness, obviously, but it might be a similar marker for possible predators. So perhaps all the animals are feeling the same discomfort, but those who sense other males are less willing to perform their discomfort behaviourally?

It's a sex distinction, not a gender distinction. Gender is what you identify as; sex is what you are. Males that identify as females would still throw their test off. Yeah, nitpicking. And call me paranoid, but this seems to have an agenda....

1) It's the brain's way of telling the body: "Another male is nearby, do not show weakness or submission because that male may try to dominate you and you may end up getting into a fight or dying." I'd assume this evolutionary response would only work for same species scents. Apparently it works between species. Men have a tendency not to get into fights with confident men.

2) During a fight with another male, the mind needs to ignore pain in order to continue on and win the fight. In all species, males who didn't have this trait lost battles, died, and did not pass on their genes. Males who had this mutation didn't let the pain get to them, went on to win the fight, lived, reproduced, and passed on these characteristics.

They've also formed a consortium to promote this processor, of which Google is a flagship member (http://openpowerfoundation.org/). The expectation (or hope, or fear, depending on your point of view) is that Google may be designing their future server infrastructure around this chip. This motherboard is some of the first concrete evidence of this.

The chip is exciting to a lot of people not just because it offers competition to Intel, but because it's the first potentially strong competitor to x86/x64 to appear in the server market for quite a while. By the specs, it's really quite a powerhouse: http://www.extremetech.com/computing/181102-ibm-power8-openp...

Can someone explain the benefits of POWER8 as compared to Intel? I thought the low volume of POWER8 chips (compared to the exceedingly high-volume Intel and ARM chips) would mean that innovation in that area would be low as well.

So presumably Google will have their own POWER8 CPUs manufactured. But who will make them? TSMC? GloFo? Not IBM, since IBM will be exiting the fab business in the near future.

I am going to guess this dual-CPU variant will be aiming at the Intel Xeon E5 v2 series. The 10-12 core versions cost anywhere between $1200 and $2600, although Google does get huge discounts for buying directly from Intel at their volume.

Assuming the cost to make each 12-core POWER8 is $200, that is a potential cost saving of $1000 per CPU, and $2000 per server.

The last estimates were around 1-1.5 million servers at Google in 2012 and 2M+ in 2013; maybe they are approaching 3M in 2014/15. Even if most of those use low-power CPUs for storage or other needs, a million CPUs made in-house could mean savings of up to a billion dollars.
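A quick back-of-the-envelope check of the arithmetic in this thread. Note that the $200 POWER8 unit cost, the Xeon price range, and the one-million-CPU volume are all the commenter's assumptions above, not known figures:

```python
# Back-of-the-envelope check of the claimed savings (all inputs are
# the thread's assumptions, not confirmed numbers).
xeon_price_low = 1200      # low end of Xeon E5 v2 10-12 core list price, USD
power8_unit_cost = 200     # assumed in-house cost per 12-core POWER8, USD
cpus_per_server = 2        # dual-socket board

saving_per_cpu = xeon_price_low - power8_unit_cost      # conservative: $1000
saving_per_server = saving_per_cpu * cpus_per_server    # $2000

cpus_made_in_house = 1_000_000
total_saving = saving_per_cpu * cpus_made_in_house

print(saving_per_server)   # 2000
print(total_saving)        # 1000000000, i.e. ~$1B
```

Even taking the most conservative end of the Xeon price range, the numbers as stated do come out to roughly a billion dollars at a million units.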

Could this kick-start the server and enterprise industry buying POWER8 CPUs at a much cheaper price? And once there is enough momentum and software optimization (JVM), it could filter down to the web hosting industry as well.

I wonder if POWER8 based servers will be available for the mass market? I'm not sure whether Google is interested in commoditizing POWER8 servers or just participates in the OpenPOWER foundation to ensure that POWER-based servers will suit their needs. The fact that Google is open about their new motherboard hints at the former, but it's not much.

I wonder how a non-Google-scale developer could even potentially get to use POWER-based servers. Will they be available from the regular dedicated-server hosting companies? What OS could they run? RHEL does support the POWER platform, but for a hefty price: https://www.redhat.com/apps/store/server/ CentOS doesn't, presumably because all the POWER hardware CentOS developers could get is either very expensive or esoteric. That likely means I don't have to consider using POWER-based servers for at least 3 years, right?

Two things. First, slightly off topic: is there any way this could be a negotiating position with Intel, on price?

Second: while many CPU cores (with enough IO) is great for large Borg map reduce jobs, I am curious to see if Google will develop/use better software technology for running general purpose jobs more efficiently on many cores. Properly written Java and Haskell (which I think Google uses a bit in house) help, but the area seems ripe for improvement.

So they're saying it's easier to use a brand new incompatible little endian Linux personality, with associated new toolchains and new ports of low level stuff etc compared to the standard Linux PPC64 stuff...

Sounds kind of surprising even if IBM did some of the bringup work ahead of time, but maybe they've got little endian assumptions baked in many internal protocols/apps.

250W TDP in a package that size.. as the article correctly states, it's about how many FLOPS you can get inside a rackmount case. that TDP alone is going to mean that you won't be able to put that many in a single case.

a dual socket board, 500W on CPUs, 600W with everything else.. the power supply would have to be something special, but the biggest challenge there would be getting the energy (i.e. heat) back out of the box..

GPUs have similar TDPs and issues - that's why the HSFs on top of them are so massive (and hence GPUs have a bit of an advantage here - they have the entire PCIE board to fit their cooling hardware on)

finally, 4.5GHz? what the hell? in one clock cycle, a beam of light wouldn't even get halfway across the board (EDIT: not chip). branch/cache/TLB misses may literally kill any reasonable performance you might hope to get out of it. intel get around this by having years of market-leading research in branch predictors, caching models, etc. and it's going to be no mean feat to match that.
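The light-travel point checks out: at a 4.5 GHz clock, one cycle gives light in vacuum only about 6.7 cm of travel, well short of half a typical server board.

```python
# Distance light travels in vacuum during one 4.5 GHz clock cycle.
c = 299_792_458          # speed of light in vacuum, m/s
f = 4.5e9                # quoted POWER8 clock, Hz
d_per_cycle = c / f      # metres per cycle

print(f"{d_per_cycle * 100:.1f} cm")  # -> 6.7 cm
```

Signals in copper traces propagate slower still (roughly half c), which is why cross-board round trips cost many cycles and why cache and branch-prediction quality matter so much at these clocks.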

i know IBM aren't exactly new to this game. but AFAIK x86 has always been faster, clock for clock, than POWER.

that said, i hope my concerns are misplaced. i'm hoping intel get some competition in the server room. it will be of benefit to everyone.

"Tails or The Amnesic Incognito Live System is a security-focused Debian-based Linux distribution aimed at preserving privacy and anonymity. It is the next iteration of development on the previous Gentoo-based Incognito Linux distribution. All its outgoing connections are forced to go through Tor, and direct (non-anonymous) connections are blocked. The system is designed to be booted as a live DVD or live USB, and will leave no trace (digital footprint) on the machine unless explicitly told to do so. The Tor Project has provided most of the financial support for development. Laura Poitras, Glenn Greenwald, and Barton Gellman have each said that Tails was an important tool they used in their work with Edward Snowden"

The planned update to Wheezy is important because it brings an update to OpenSSL. Updating OpenSSL on Squeeze is time-consuming and buggy, and a later version is required to run several software packages including Bitmessage.