I discovered Threes only today, and I had no idea it looked so similar. I searched a bit and it appears as if 1024 is also inspired by Threes, so my game is probably the last of a long chain of clones :P

2. Get 128 into the middle two slots of the top row. Really any edge works, but I'll say top for simplicity. Keep a semi-large value next to the one open slot in the top row to prevent it from sliding. The other side of the top row is used for staging.

3. Only use left, right and up. Never let a smaller value get trapped. Break this rule only to avoid ending up with the top three rows full and the fourth empty.

I lost this game because I mistakenly filled the top three rows, forcing me to use a down. I still think this strategy is viable for a win, though.

The game is super addictive and I figured out a way to get to 11000 in the first hour. It's actually an excellent analogy for social network-type products and any business really. If your users belong to different clusters and similar clusters don't meet, there is little value and the network doesn't become more valuable for anyone. By focusing on the same corner scenario you help similar clusters find each other consistently and thus amplify value to each other. I took over one corner and kept stacking onto it with blocks of increasing value, essentially never moving out of it. If you move out of your corner, a different cluster takes hold in it and then everyone in that corner will hesitate to buy into you, even if the other cluster is small - it becomes a thorn in your butt. Great job! I learned something new today (Plague has also been very educational for me so far, but for viral dynamics.)

One minor nit: if the board is full, but you can make a move that will free up a space, you can make that move and the new tile will appear in the newly opened space, but then the game immediately ends afterward. EDIT: OK, apparently this only happens if there are actually no more moves; as long as moves remain the game continues.

Also, hitting "space" resets the entire game; I'd expected it to either do nothing or add a tile without moving.

I've found that by simply pushing up, left, down, right, I can routinely outdo the score I attain by actually playing with thought... Kind of disappointing, but then I've lost hours to this already. Fabulous game!

I played a round, and got to 512. But toward the end I wasn't sure if I was actually playing with a strategy, or just pressing buttons randomly with some thinking involved.

So I built a script to randomly press the arrow keys[0]! I let it play a few games, and the highest it got to was 128 before consistently losing. So I guess you'll need some decent strategy to get to 2048.
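The linked script itself isn't shown, but the idea is easy to sketch. Below is a minimal, self-contained Python version that plays 2048 with random (legal) key presses; the board logic and the 90/10 tile-spawn odds are my reconstruction of the game's rules, not code from the original script:

```python
import random

SIZE = 4

def slide_left(row):
    """Slide one row left, merging equal neighbours once, 2048-style."""
    tiles = [v for v in row if v]
    out, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)   # merge a pair
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (SIZE - len(out))

def move(board, direction):
    """direction: 0=left, 1=up, 2=right, 3=down. Returns a new board."""
    if direction == 0:
        return [slide_left(r) for r in board]
    if direction == 2:
        return [slide_left(r[::-1])[::-1] for r in board]
    # up/down: transpose, slide left/right along the former columns, transpose back
    t = [list(col) for col in zip(*board)]
    t = move(t, 0 if direction == 1 else 2)
    return [list(row) for row in zip(*t)]

def spawn(board):
    """Drop a 2 (90%) or 4 (10%) onto a random empty cell."""
    empty = [(r, c) for r in range(SIZE) for c in range(SIZE) if board[r][c] == 0]
    if empty:
        r, c = random.choice(empty)
        board[r][c] = 2 if random.random() < 0.9 else 4
    return board

def random_game():
    """Press random legal arrow keys until stuck; return the highest tile."""
    board = spawn(spawn([[0] * SIZE for _ in range(SIZE)]))
    while True:
        legal = [d for d in range(4) if move(board, d) != board]
        if not legal:                      # no legal move: game over
            return max(max(r) for r in board)
        board = spawn(move(board, random.choice(legal)))

random.seed(0)
print(random_game())   # highest tile reached by the random player
```

Running `random_game()` a few times gives a quick feel for how far pure button-mashing gets before a real strategy is needed.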

This simple game really shows how amazing the human mind is. I've never played any variant of this game before and when I first started I was blind to the mechanics of how this worked. I was moving so slowly and would fill up the board quite quickly.

After playing this game for 2 hours now my fingers are moving faster than my conscious mind can really follow. In my last game I was doing combo moves taking "2" blocks to "64" blocks in mere keystrokes. I've been surprised several times when things work out.

I think I have the beginning of a solution. The end result should be 2048 on the top-right corner (for the explanation). Always keep the highest value there. To do that never do a down without the right column filled and never do a left without the top row filled. Within those restrictions keep the top row in ascending order by building numbers on the left of the second row, so that they will match above and cascade right in powers of two.

I did this successfully for a while and then had no other option but a down without the rightmost column filled and lost my placement.

EDIT: A variation of this that works well is to only do up, left and right if at all possible. This keeps the highest values on top making it easier to match top-down. I've been stuck on 512 though.

A simple strategy of building along one side will get you to a 1024 tile every time, but getting to 2048 seems largely dependent on RNG. If you get tiles spawning where you want them, it's very easy. If not, it can actually be impossible.

I just repeat Left => Up => Left => Up until I can't move any further, then "Right => Up => Left" before going back to the left up combo. Keeps the largest values top left and the values decrease towards the bottom right. Stacks the values well for combining and you can easily get 512 => 1024 without even looking.

I got a single square with 256 in it and a score of 2016 before I got bored. It's addictive and fun but a single game is much too long. I would like to lose a few times and improve but with a game this long I get bored before I've even lost the first time.

Maybe Monday morning is finding me too pedantic, but this is similar to Threes (http://threesgame.com), not just like Threes. There are some pretty obvious differences, from the movement of the tiles to the requirements for tile mergers (multiples of 2 rather than 3). The comments thus far do not make that distinction.

The best thing about this game is that you can press the arrow keys randomly for about five minutes and anyone walking past thinks you're incredibly good at the game. Just make sure you lose after they walk past.

I managed to get 2048 on my first try. Different enough from Threes to stay interesting, but a lot of the same strategy applies. In Threes though, the number that appears has a higher chance of being a bigger number (24, 48, even higher later) the longer the game goes on. There were a few times I was almost stuck and was able to get out of it by continually moving in the same direction and sucking up 2s.

A topic other than the irresponsibility of "outing" a guy through the clever trick of using his name and public-records lookups:

> A libertarian, Nakamoto encouraged his daughter to be independent, start her own business and "not be under the government's thumb," she says. "He was very wary of the government, taxes and people in charge."

> What you don't know about him is that he's worked on classified stuff. His life was a complete blank for a while. You're not going to be able to get to him.

Growing up and living in the D.C. area, I'm constantly surprised at the paradox of the deeply conservative anti-federal-government types who work for the government, directly or as a fed contractor - people who'll rattle off about privacy issues before hopping on the bus to their job on an NSA contract at a federal contractor... that sort of thing.

I've even pointed out point-blank that their salaries are paid for by the same taxes they rail against incessantly and am met with blank stares or wry grimaces before they launch into an extended soliloquy about "values" or personal responsibility or some such. I've even had folks in the military swear up and down that some military benefit program isn't a result of taxpayer dollars but mysteriously appears out of some kind of pay-differential sacrifice they've made instead of working in the private sector.

It's rather bizarre and I guess to Nakamoto's credit, he actually did something about it in a sense.

EDIT: meta-response to the replies indicating that perhaps his close contact with the government is what motivated him to develop Bitcoin: I think that's plausible. What we don't know is whether he developed this philosophy before or after working with the government.

I'm curious, though, in the general sense, about people who have a fundamentally anti-government philosophy, then take roles supporting and building up the same government they clog their Facebook feeds railing against.

Being labeled Satoshi regardless of truth is pretty much going to get you robbed, kidnapped or killed. This dude lives in this town and has $400M of untraceable currency? The article gives his name, face, address and relatives. You can be sure as hell that somebody will do something stupid to try and get to it.

I wouldn't wish this label upon anybody; it's exactly why the community tries to avoid speculating about it. It's extremely irresponsible of the newspaper to publish this, true or otherwise, especially in such vivid detail.

So Newsweek outed a guy who allegedly owns half a billion dollars in pseudo-untraceable, digital cash? I hope they're also going to chip in for a permanent security detail...

More seriously, I think they could have done a better job reporting on the identity without giving so much away:

* A picture of his house is posted, identical to the one in Google Street View

* The license plate is relatively clear in the high-resolution image

* His exact address has more or less already been discovered using only the information in the article

* Full names of family members were used

It's a legitimate story -- understanding Nakamoto's motivations for creating Bitcoin as discovered from his past is a worthwhile topic. (For example, would your feelings about cryptocurrency change if it turned out Nakamoto was a high-level NSA operative?) But, again, it could have been reported in a way that didn't compromise his identity so thoroughly.

Quite an odd article for such an important (if true) exposé. The only reason I think it's possibly true is Gavin's vague tweet.

50% of the article deals with material about Bitcoin that is redundant to anyone who's been following it for more than a day (like most here).

45% deals with "Dorian S. Nakamoto"'s family, personal background and that he's a libertarian oddball with a penchant for math (but no other significant accomplishment within it or CS) who (other than possibly being the Satoshi) has led a fairly normal, middle-class Southern California lifestyle.

The remaining 5% or so details a brief encounter with the man, in which he neither confirms nor denies it.

I'll probably be slammed for this, but I actually think it's a pretty good piece. Maybe it could do without the picture of Satoshi's house, but it probably wouldn't be so hard to find the house anyway if you know you're looking for an actual "Satoshi Nakamoto in Temple City".

If the dude hadn't used his real name, we'd probably still be wondering who he is. So I think the indignation is a bit misplaced. It's not at all uncommon or nefarious for news reports to be written about people who don't particularly want the coverage.

I'm going to skirt the ethical questions being asked, assuming everything done to get this information was legal and such and the accuracy of it lives within the bounds of journalistic integrity (of course none of that necessarily makes it ethical, but like I said, circumventing that question for now).

All that said, from the number of reactions I'm reading here I get the impression that in the Bitcoin world someone with a significant amount of wealth has to fear for their life? What is the difference between Satoshi Nakamoto and any other individual of significant wealth, i.e. Bill Gates, Warren Buffett, Rupert Murdoch, etc.? While I don't have any exact addresses or other information about these people on hand, I'm sure I could get it rather easily.

The emphasis I keep seeing is on how he has $400M of "pseudo-untraceable, digital cash", and I assume the concern is that it would be more difficult to extract that much from Bill Gates if you attacked/kidnapped him and get away with it, whereas with Bitcoin you could theoretically extract the keys guarding the coins from the victim and quickly transfer them out to other wallets without much issue.

So, the gist I'm getting, is that in the world of crypto-currency if you get wealthy ... man you better watch out because people are going to be gunning for you to steal your coin by force if they ever find out where you live. Live In Fear. If this is the great future of finance you all envision, then I really wouldn't want any part of it.

*Side note, I really don't believe any of the above but given some of the responses I've seen I think we need to take a step back and examine the conclusions that would result from some of the statements being made.

"What?" The police officer balks. "This is the guy who created Bitcoin? It looks like he's living a pretty humble life."

That quote smells totally fake to me. There's just no way some random officer would know what Bitcoin is, and even if he did, that's not something a police officer would say. I don't know what that says about the rest of the article, but that quote doesn't read very factual to me.

> "Dorian can just be paranoid," says Tokuo. "I cannot get through to him. I don't think he will answer any of these questions to his family truthfully."

What the hell; if family members are so eager to forward questions from the press to him and spill anything they know, I can totally understand why Satoshi doesn't answer them truthfully. I also feel very sorry for Satoshi's position, in which he doesn't seem to have anyone to talk to truthfully :/

> Of course, none of this puts to rest the biggest question of all - the one that only Satoshi Nakamoto himself can answer: What has kept him from spending his hundreds of millions of dollars of Bitcoin?

Isn't it obvious? It would destabilize the market and begin a huge frenzy to find out who he is, and he knows it. Now the latter is a moot point, but I can totally understand he doesn't want to backstab his brainchild.

Besides, who says he didn't mine other coins early on anonymously for his own use? Wasn't the point of Bitcoin that you can't know who's who? If he did this and got some money, he totally deserved it.

> "For anyone who's tried to wire money overseas, you can see how much easier an international Bitcoin transaction is. It's just as easy as sending an email." -- Bitcoin's chief scientist, Gavin Andresen

No, actually, it's only as easy as Western Union is. You either take a huge cut via localbitcoins or other markups, or you go the normal route of registering at an exchange, giving them all your details (which will take weeks to months), after which they place major limits on what you can transfer (no more than $2k-$10k), and the exchange may crash, burn and be robbed while you wait for your fiat.

So actually it's like transferring money between two Western Union branches that are both in war zones and staffed with employees taken from the DMV.

I'm torn about the article. On one hand this seems like a horrid breach of privacy and a terribly dangerous thing to do. On the other, even if they just said he lives in his family home in California, people were going to find out all this information.

Half of me thinks it's better everyone knows they were doxed all at once.

* A smart man (according to the article)
* Who worked for the government at some point in time (according to the article)
* Whose name is Satoshi Nakamoto
* Who values privacy (so much that he used his real name LOL)

So apart from demonstrating the stalking and extremely irritating privacy breach by Newsweek's[1] journalists and their chosen course of action, this article proves or states nothing.

He didn't admit anything, but seriously... even if he did, why do we care at this point?

[1] I was holding NS in low regard anyway. Now it's as low as it gets in my eyes.

Honest question: it seems like Nakamoto wanted to keep his identity secret. If that's true, then why did he reveal so much information (by implying that he was part of Bitcoin) and allow a photo of him to be taken, instead of saying "I have no relation to Bitcoin"? It doesn't add up.

> "He was the kind of person who, if you made an honest mistake, he might call you an idiot and never speak to you again," Andresen says. "Back then, it was not clear that creating Bitcoin might be a legal thing to do. He went to great lengths to protect his anonymity."

Except that he used his full, real name. That is what seems so odd to me.

If it really is him though, I'm very much afraid this article just destroyed his life...

A lot of people are calling this "doxxing" which it isn't - identifying someone based on their ACTUAL NAME and profession isn't doxxing. It may be horrible, irresponsible, dangerous, I don't know - still forming an opinion about that, but that's not doxxing as I know it or see it defined anywhere.

So Newsweek hires paparazzi now? The need to disclose everything about the guy, call several family members, etc. is really wrong. Wish I could undo my click... There's no need to invade his privacy so much; "fun to know because interesting" is not a good enough reason to write the article...

Meta: this submission has 811 points and was posted only 5 hours ago, yet it is at the #11 position. Is this the regular HN algorithm at work or is it weighted down due to its controversial nature? (I'm not trying to imply there is any conspiracy... I actually remember reading that submissions with a high vote-to-comment ratio are weighted down, but I'm not completely sure.)

"Tacitly acknowledging his role in the Bitcoin project, he looks down, staring at the pavement and categorically refuses to answer questions."

"I am no longer involved in that and I cannot discuss it," he says, dismissing all further queries with a swat of his left hand. "It's been turned over to other people. They are in charge of it now. I no longer have any connection."

"What?" The police officer balks. "This is the guy who created Bitcoin? It looks like he's living a pretty humble life."

- I do not believe this exchange took place. The police would've had his name from his initial call to them, and a random officer from the Sheriff's department would not likely recognize that name as the creator of Bitcoin. Just saying.

> Though Nakamato's identity was a source of speculation since the launch of Bitcoin in 2008, an article in the news magazine Newsweek by Leah McGrath Goodman, published March 6, 2014, made the case that his true identity was Dorian Prentice Satoshi Nakamoto (born 1949), a Japanese American man living in California.[8]

Completely irresponsible to put a picture of his house in the article. I mean, she didn't even blur out his house number. It took me a single google search to find his full address with that number (matching street view).

With this out of the way, maybe cryptocurrency can focus attention on leveling up protocols and systems to improve utility. When bitcoin becomes the Friendster of cryptocurrency, Satoshi won't matter, just the disruptive ideas around our proxies for value and the new tools and power that can be used in positive ways to help improve the lot for all humans.

People want the confidence that they are able to securely accrue and employ the value of their efforts and wisdom to improve their standard of living. The values of the mainstream of humanity will determine the fate of this stuff. The current level of technical acumen required to handle and secure most any crypto$ is too high for them right now.

I'm not up-voting this because Leah Goodman has violated even the most basic journalistic integrity that should be afforded such a sensitive topic.

Firstly, she very dubiously breached Nakamoto's trust by attempting to get through to him by talking about his passions. Then, when she didn't get the response she wanted, she posted this article that lists multiple family members' full names, most of Nakamoto's (if this is even the real Nakamoto) personal and employment history, and then has the audacity to post a photo of Nakamoto's house that is close enough to a Google Street View photo, enabling others to pinpoint his location.

If something bad happens to Nakamoto as a result of the personal information disclosed in this report, it will be a great shame for Newsweek.

I wonder if Leah McGrath Goodman would like photos of her home published and members of her family identified against her will? I wonder if she thought about that, or the man and his family's safety, before choosing to publish this information about him?

Reading the description of the man and recognizing the value he placed on privacy and anonymity, I'm genuinely sad for him. I also fear for his personal safety and that of his family for the reasons others have stated.

I thought that Bitcoin as a whole would be badly shaken the second Satoshi touched his coins. What if, now that he allegedly has a face, he has legitimate needs to spend his coins on?

I would be worried if I was the reporter. If anything happens to Satoshi, I suspect there are a moderate to high number of people who will make this reporter's life miserable as retribution. I'm thinking of all the bs that Krebs has to put up with.

Seriously irresponsible reporting. Not brave, not necessary, not helpful, not interesting, just stupid.

Goodman writes: "Two weeks before our meeting in Temple City, I struck up an email correspondence with Satoshi Nakamoto, mostly discussing his interest in upgrading and modifying model steam trains with computer-aided design technologies. I obtained Nakamoto's email through a company he buys model trains from." This is so sneaky and sad.

If Nakamoto ever sells his Bitcoin fortune, he would likely have to do so at a legitimate Bitcoin bank or exchange, which would not only give away his identity but alert everyone from the IRS to the FBI of his movements.

I think they just did that.

Amazing that he actually used his real name. This tells me that he didn't realize how far it would go when he started it.

On the bright side: if keeping his anonymity was Satoshi's main reason for not touching his BTC fortune, now he and his family will finally be able to use all that money and take benefit from it - well deservedly.

Very few people here seem to be discussing the fact that the article offers little real evidence that this is the Satoshi Nakamoto of Bitcoin, and that most likely they just set up an eccentric old man with an unfortunate name collision to end up getting mobbed by the public.

Well it's too late to get any points for this inference now, but I'm going to claim that there was a strong clue that the author of the PDF was old: the bitcoin paper cites "An Introduction to Probability Theory and Its Applications" by William Feller. This is a classic, from the 1960s, but I don't think it's very well known among people under 40 (correct me if I'm wrong).

There's one thing that doesn't add up: why would such a privacy conscious man use his real name on a project he thought might be illegal? If he was so serious about his privacy, he would not have used his real name in public.

The biases of this article aside, he sounds like a very interesting man. It saddens me that the way we found out who he really is was by a very gross invasion of his privacy. A sit-down interview (in person or virtually) would have been much more interesting. I would have liked to have known eventually, but not like this.

While off topic, I found this interesting bit of information on the Wikipedia page for the A-10's gun.

"The recoil force of the GAU-8/A[16] is 10,000 pounds-force (45 kN),[3] which is slightly more than the output of one of the A-10's two TF34 engines (9,065 lbf / 40.3 kN each).[17] While this recoil force is significant, in practice cannon fire only slows the aircraft a few miles per hour in level flight."

The gun firing produces more recoil force on the plane than is produced by one of the plane's engines. That is simply amazing.
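The "few miles per hour in level flight" figure checks out with a back-of-the-envelope impulse-over-mass estimate. Only the 45 kN recoil comes from the quote; the aircraft mass and burst length below are assumptions, and engine thrust and drag are ignored, so this is a worst case:

```python
# Back-of-the-envelope check of the "few miles per hour" claim.
recoil_n = 45_000        # GAU-8/A recoil in newtons (from the quote)
mass_kg = 16_000         # assumed combat weight; actual varies with fuel/ordnance
burst_s = 2.0            # assumed burst duration

# Impulse / mass gives the velocity change if nothing opposes the recoil.
delta_v = recoil_n * burst_s / mass_kg       # m/s
print(round(delta_v * 2.23694, 1), "mph")    # ~12.6 mph worst case
```

With the engine thrust it roughly cancels (and drag) factored back in, the actual slowdown lands in the "few miles per hour" range the article quotes.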

My dad has always said that the A-10 is an infantryman's best friend. An F-16 or F-18 will strafe over the battlefield and is gone; an A-10 will just hang around.

When I was younger we went to a nature preserve that is adjacent to the gunnery range at Moody Air Force base. We went up in an observation tower overlooking the preserve and watched A-10s do strafing practice. The sound of the GAU-8 main gun is something you have to hear to believe. If bad intentions have a sound it's that gun.

The A-10 is a Cold War attack jet designed to take out Soviet tanks. It's really good at (relatively) slow, guided, precise air-to-ground strikes. I think it would make a good candidate for a new class of drone fleet.

A joke I heard is that if the Air Force were allowed to buy whatever planes it wanted, every single one of them would be a single-seat jet fighter that goes very fast. No cargo planes, no helicopters, no tankers, no CAS planes.

The Air Force should just hand the A-10 over to the Army, the ones who really know how valuable it is.

When I was a kid I saw this demo tape and was blown away by how lethal and intimidating the A-10 looked. Ever since I've been fascinated with it. The video quality is very poor because this was shot in the late seventies or early eighties but it demonstrates the ferocity of the plane pretty well.

I'm not familiar with all the font libraries out there. I use Font Awesome right now, and quite frankly it's nice for being free, but it has limitations in other regards (like only being pixel-perfect in multiples of 14, etc.). This seems like a great alternative to what's out there (different details at different resolutions, internal colors that can be changed).

HN confuses me more and more every day. Upvoted to #1, but 99% of these comments aren't constructive.

I feel I am perhaps being too pessimistic, but I fear this trend toward a heavier, unused-feature-filled web - a mosaic of libraries that make web development or design better in X and Y ways that really don't affect the end user that much, but largely increase the cost and slow the viewing of a webpage, especially in places and countries without the high internet speeds the developers inevitably have.

How many icons will a website need before an abstraction like this is necessary to manage them?

If every icon has each element labelled with large prefixed classes (".iconic-camera-slr-lens-release"), this is going to be a lot of extra footprint for websites that have enough icons to make this useful.

Perhaps I am alone in thinking that colourful icons are somewhat noisy, and thus will be used only in designs where icons are prominent elements, and as such infrequently; with that frequency, they could even be individually coded for.

I'm using Streamline on my current project ( http://www.streamlineicons.com/ ), and I'd recommend it over Iconic if you're looking for a commercial-grade icon solution - it's more expensive, but well worth it. 2 sizes, separate resources for filled vs outlined which are well thought out and involve more than just "filling in" the outline version. No SVG, but various vector formats which can easily be exported to SVG via batch tools. And they seem to really be into supporting and extending their product, every update has been free and I've been notified in a non-spammy way.

...and no, I'm not affiliated, I've just had a very pleasant experience working with their stuff, IMO for a commercial product it's well worth the investment.

I'm not a UX designer, but do you really want your icons to change detail based on display size, rather than have a uniformity in display across devices? Wouldn't that increase the burden on the user to memorize more icons that they have to potentially interact with?

I know the detail scales on the icons are subtle, but intuitively I'd think it might make a difference.

I personally backed this when it was on Kickstarter for $35 and do not regret my decision. I have used FontAwesome in the past as well as a few other free alternatives. While the javascript-less-ness of FA is nice (Iconic has a webfont), the quality of the icons themselves and their level of customization comes nowhere near Iconic. Many people have mentioned the multiple colors on one icon, but I haven't seen anyone talk about the ability to easily theme all icons with just a couple lines of CSS, which makes the multiple-color thing more appealing.

Additionally, Iconic is available as Webfont and PNG if so desired. They are also working on a number of additional features that I find interesting (bottom of the features page https://useiconic.com/feature-index/), specifically ExtendScript for Illustrator and then generation via Grunt.

I am pleased with my $35 purchase and have no reservations about paying the $99 for a commercial license if it fits the project (like any icon set you choose!)...However, for those who haven't had a chance to try it I really wish there was a cheaper/free option for experimenting.

Kickstarter backer here. Funny how critical everyone is as though there is no demand for something like this. Well, over 2000 of us disagreed before iconic was even delivered.

I've used Fontawesome, I've used Entypo, I've used Weblays, I've used the original Iconic. I think that this offering is a step above all of those, especially in the web category (I use them in mobile too).

The only mistake here is the licensing, which I hope will change. It was not clearly stated during the Kickstarter campaign, and is actually against my expectations (though it appears some in the comments section had discussed this).

IMO, it should be non-tiered and unlimited commercial use. That aside, wake up, this is useful.

Requiring JavaScript to view icons seems like a major downside compared to competitors that are plain font files like Font Awesome. Now the client has to download and execute the JS before seeing what might be important UI cues.

Forgive my ignorance... What are the advantages of using icon systems such as Iconic & Font Awesome as opposed to using Unicode character codes? Maybe not all icons are available in Unicode? And Unicode is geared toward language? Thoughts?

Our designer would consider it, if the visible license options were not so restricting. Rather than make it "Limited to 1 commercial project", they should add a reasonable license solution that allows a team to use it for many projects.

Interesting, but you lost me on the home icon. A '+'-shaped window with a door, preferably with a chimney - that's the iconic image most people recognise on sight as a house. Not some weird triangle on a square with an inverted V on top which maybe kinda sorta indicates a roof... but why is it separate?

I just realized for the first time that I'm apparently using F.lux differently from all other people. For me, it's about making the color palette more compatible with the lighting situation in the room. I'm not into all that circadian stuff at all.

I love the new features, but I'm not wild about the software calculating the "night-time-but-not-bedtime" duration for me. Though F.lux seems to go into the opposite direction, I would prefer more configurability not less - for example letting people set the transition times themselves and enabling them to have as many lighting modes as they want.

One of the best utilities ever. What I would do to get this on iOS devices. And if you guys feel like monetizing, throw up a donation button I'm sure you'll have transactions ringing nonstop. Thanks for the amazing utility you've created - you help us work better and sleep better.

I just witnessed proof that I NEED flux - I turned it off to download this update, and it felt like my eyeballs were stabbed with a blue knife. The difference was shocking. I don't know how I ever lived without it.

I had the previous version installed on my mac, and kept seeing sporadic issues with my mouse cursor jumping a couple hundred pixels at once when moving it side to side. Finally disabled F.lux and the problem went away. Anyone know if the new release fixes that issue?

There's a new trend I've noticed recently in the software industry toward research-driven development. There's another link on the frontpage about reading software by a startup called Spritz http://www.huffingtonpost.com/2014/02/27/spritz-reading_n_48... that has somehow managed to get a >300% improvement in reading speed just by taking eye scroll out of the equation. I'm excited that we've reached the point where we've started questioning the fundamentals of our user interfaces, and I'm surprised how easy the switch to this next generation of design has been. I expected the process to resemble the painful switch from QWERTY to Dvorak, but it's been more creative than that.

The new version number is 26.0. I'm noting this because when I first tried to install the update by overwriting the version in my Applications folder, it was still my old version (23) that ran for some reason. If you don't see any difference after installation, open About f.lux and make sure you're on version 26.0.

I've been using f.lux for about a year. Honestly, I think it's just a placebo: I haven't noticed any real effect, and my sleep schedule is terrible. I'm commenting because otherwise the only people commenting are those who did benefit (or at least believe they did), so the comments are not an accurate survey of how many people really saw an effect.

I am happy with Redshift, like everyone else who found f.lux buggy on Linux. F.lux is missing the boat on a lot of developers, I'm guessing :) Mac people don't work nights anyway, when Starbucks is closed, so I don't see the point.

I was just turned on to f.lux recently and I can't recommend it enough. I find the effects to be really noticeable and positive; working during the night is much less abrasive, and I find the transition from screen to bed to be really smooth.

I love that something so simple can have such direct, physical ramifications.

I used to sleep in a room on the roof and leave the door open. The sun would be facing me just when it's up and I'd wake up early. It was great.

But even when I changed rooms, I didn't close the curtains or anything, so the sun would be directly in my face when it rose, and I'd wake up and start the day.

But a lot of the time, I'd be up before the sun (up by 4:30, work out, take a shower, eat breakfast - steak, eggs, half a liter of milk, some fruit) and start the day. I'd see people running on low battery by 11:00 while I'd be throbbing with energy until the very last moments before coming home.

I drank a RedBull only once in my entire 26 years of existence, and it was only this year. I didn't like it.

I cannot live without f.lux on Mac and Twilight on Android. I can't wait for my orange shades to arrive, as I have CFL lights in the kitchen which I cannot remove, and I recently started supplementing with bioidentical melatonin. I've been using f.lux since it was released years ago, used Redshift on Ubuntu, and this release finally brings the Windows features to Mac - I'm so happy! I've been ridiculed all these years for my reddish screen; most people ask, "What's wrong with your screen?" and get, "No, what's wrong with yours?"

I didn't really "get" the purpose of f.lux for a while. I appreciated the sleep-schedule reinforcement aspect of it, but if you don't have a normal sleep schedule, that would seem to be less useful. It's always been f.lux's major selling point, and it seemed so intrusive that I didn't use it.

However, I finally figured out the real reason for using it: white balance adjustment. The thing is, our eyes aren't just imaging sensors; they're active systems that continually adjust to ambient conditions, doing lots of things without us even thinking about it. One of the most important is compensating for white balance. If you look at a white wall when the sun is shining on it at the height of daytime, and again in the middle of the night when it's lit by artificial light, you will perceive it to be the same color both times. In reality it's not - when lit by indoor lighting it's a very different color - but our visual system automatically adjusts for the different spectrum of the lighting.

The problem is that computer monitors throw a monkey wrench into this because they are independent light sources. White displayed during the day on a computer monitor is #FFFFFF, during the night it's still #FFFFFF, but this conflicts with the white balance of the environment. And that conflict causes eye strain and discomfort. At night looking at your monitor you might even perceive white to be slightly bluish, due to the conflicting white balance. By bringing the white balance of your display into harmony with the changing white balance of ambient lighting (as it transitions from natural to artificial) you get rid of a lot of those problems.
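That white-point shift can be pictured as scaling the display's RGB channels toward a warmer target as ambient lighting changes. The sketch below is purely my own toy illustration - it is not f.lux's actual algorithm, which is based on real blackbody color science - but it shows the shape of the idea:

```python
def whitepoint_scale(kelvin):
    """Toy RGB channel multipliers for a target color temperature.

    At ~6500K (daylight) all channels run at full strength; as the
    target drops toward candlelight, blue (and, less so, green) is
    attenuated, warming everything on screen. A real implementation
    derives these from blackbody radiation curves; this linear ramp
    is only illustrative.
    """
    # Clamp blue between candlelight (~1900K) and daylight (6500K).
    blue = max(0.0, min(1.0, (kelvin - 1900) / (6500 - 1900)))
    green = 0.6 + 0.4 * blue  # green fades less aggressively than blue
    return (1.0, green, blue)

print(whitepoint_scale(6500))  # daylight: no shift at all
print(whitepoint_scale(2700))  # incandescent: blue heavily attenuated
```

A #FFFFFF pixel multiplied by these factors stays white at midday and turns progressively amber at night, matching the room instead of fighting it.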

Hopefully with f.lux adding more configurability into their program they can make people more aware of these benefits regardless of sleep patterns.

Does anyone else have trouble understanding (or "intuitively reading") the graph in the f.lux beta preferences? I discovered that it's a kind of "ego-centric" graph. I mean ego-centric just like there were once earth-centric (and later helio-centric) models of the universe.

Because the graph is totally ego-centric, it starts when you wake up. I just can't wrap my head around that. In my mind, I wake up at a specific clock time, and the universe is configured in a certain way at that particular moment. In particular, the sun has a certain position in the sky. (Interestingly, I use an earth-centric model in this regard.)

What's (relatively) constant for me is how the sun moves through the sky (this depends on where you live on earth, plus time of year). Obviously, it's beyond my powers to change the time of year. I could change where I live on earth, but I'm not doing that very often. What's directly controlled by me is when I wake and go to bed... Why can't I change these positions on an otherwise static "map"?

I don't want to express the current year as relative to my life either, i.e. three periods: "the time before I was born", "the time that I live", and "the time after I die". That would be rather insane. Yes, we use Jesus' date of birth as a reference point now - you could argue that's bad and we should count from a different epoch - but at least things are not expressed relative to my life.

I don't know about any of the "sleep benefits", but as someone who works and enjoys being in front of computers 10+ hours a day, it's great! As soon as I got it 3+ years ago, my red-eye, eye-discomfort, dry-eye and eye-strain problems disappeared! I can't use the computer without it (day or night).

Can I have a shortcut for disabling it for an hour? Or maybe toggle the setting when I double-click the tray icon on Windows? That would be really cool; I use the toggle so often, and a single double-click or shortcut seems so much better than two clicks.

Is f.lux "compatible" with people who have a day job and do side projects after hours? You want to be sleepy when it's time to sleep, but you don't want to be sleepy when you're working on your exit ticket from bigco.

This is a really bright idea, in that almost all companies do an absolutely bloody abysmal job of implementing their checkout flow. The median testing budget for it is generally zero, unless you scope the population to "large, savvy ecommerce providers." I love the idea of being able to basically take advantage of the herd effect for optimization, and clearly there are non-linear advantages to the Stripe ecosystem, because getting credential/CC pairs into the system most probably increases systemwide spend on them and that is how both merchants and Stripe make their money.

I'm probably going to try this in Bingo Card Creator in an A/B test against my existing purchase flow at some point. I'll be honest: the likelihood of the average English teacher knowing Stripe does give me a bit of pause with regards to the UX and the prospect of my VA having to answer a lot of "Who is Stripe and why are you telling them my credit card number? Did your Googles get a virus?" emails. Still, it seems worth testing. Worst comes to worst, you just go back to the pre-existing checkout flow - whatever Stripe.js integration you're using right now - and you have full control over the experience.

I have seen and supervised successful redesigns of purchase experiences before. They print money. BCC got a 60% or so lift in purchases using a Stripe-powered checkout back in the day, after some hillclimbing, discovery of synergistic effects, and burning the kinks out of my integration. I think there are likely motivational numbers hiding in a lot of your businesses. You should absolutely be testing them on a regular basis yourselves, but this seems to be a decent stab at a way of testing without requiring focus/bandwidth or major traffic [+], which are two major reasons people give me for not testing.

[+] I have noticed many people suggesting "You could do per-account multivariate testing on e.g. whether the Remember Me button is a win or not", and feel obligated to point out that that will probably only work for accounts doing, minimally, thousands of transactions a month. The great thing about this is that if you've got only 2k visits a month and 40 purchases, then - assuming systemwide performance is a good proxy for your performance (and n.b. that's an assumption which is tractable to measurement) - we can still get solid test results by using the other millions of visitors and hundreds of thousands of transactions flowing through the system every $PERIOD.
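For a sense of scale, the standard two-proportion sample-size formula makes the footnote's point concrete. The baseline rate and lift below are hypothetical numbers of my own, not figures from Stripe or BCC:

```python
import math

def visitors_per_arm(baseline, relative_lift):
    """Approximate visitors needed per arm of an A/B test to detect a
    relative lift in conversion rate (two-sided test at 5% significance
    with 80% power -- hence the fixed z-values below)."""
    z_alpha, z_beta = 1.96, 0.84
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# A 2% baseline conversion rate and a hoped-for 20% relative lift:
n = visitors_per_arm(0.02, 0.20)
print(n)  # on the order of 21,000 visitors per arm -- hopeless at 2k visits/month
```

At 2k visits a month, a single test like that would take years to conclude on your own traffic, which is exactly why pooling across the whole network is attractive.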

Stripe Checkout is nice, but unfortunately it's not suitable for us, since the "Remember me" checkbox cannot be hidden.

"Remember me" is confusing for users. What is being remembered? By whom? When you're dealing with users who may already be concerned about whether it's secure to enter their credit card number into your website, I feel like the "Remember me" box is just adding another layer of confusion and concern.

I'm surprised that the "Remember me" checkbox can't be hidden, given how focused on their customers Stripe normally is. The "Remember me" checkbox feels like something Stripe is pushing on me to help them with their business objectives, which isn't the vibe I usually get when dealing with Stripe.

The demo of checkout available at https://stripe.com/checkout uses a canvas element for the demo animation. It's a really well done walkthrough. Was it entirely custom-coded or done using a framework / tool to help?

We've been using Stripe Checkout at Humble Bundle for quite a while and it has been awesome. It is really easy to set up and once a customer has used it, it's incredibly easy to checkout in the future. Every couple weeks I hear about a new A/B test that is running to try to make it even better.

At CircleCI, we've been using Stripe Checkout for quite a while. It was incredibly easy to set up and very high quality (we replaced a hacky, ugly checkout page with it), and it looks really professional. That professionalism is really important at the final stage of the funnel.

One of the things that's really interesting about Checkout is that Stripe is actively focusing on increasing the conversion rate for us. Their new layout (with the phone number) has a 20% higher conversion rate than the previous version.

> We've been testing this for the past couple of months - our hypothesis was that it would increase conversion rates - and we're delighted that it has been confirmed.

pc, do you know if the conversion rates increased for the majority of the subscription-based sites that you monitored?

Our company has a subscription-based service that uses Stripe Checkout, and some of our customers have expressed confusion regarding the "Remember me" feature. Even the CEO of our company expressed confusion initially, and he requested that I ask Stripe for the option of hiding the "Remember me" field.

From their perspective, there's no reason why their payment information should be remembered because they have no reason to enter their payment information again in the future since our service is subscription-based.

I think the "Remember me" feature would be less confusing at an e-commerce site where customers may make additional purchases in the future.

Also, we'd like to be able to hide the customer's email address in Stripe Checkout, not just disable the email address field.

So essentially, we want the old Stripe Checkout that only requested payment information.

Hot damn. The design and experience I felt from this page is overwhelmingly great. I've always loved stripe's design and they continue to blow me away. Really excited to activate our account any day now.

Interesting move by Stripe, and I guess it explains why WePay and Balanced choose to focus on the API and not their d2c offerings.

With the 'remember me' feature, Stripe has chosen to encroach on the territory of their developers, which greatly concerns me.

I love their product, but one of the reasons I choose to use them is because of the options that their API provides. Is this a back-end play to eventually cut out developers, or is it designed to help them sell more product? I'm sure Stripe staffers will say that it's the latter, but if that's the case, who is the primary customer for this offering?

I'll try to offer a slight variation on what others have already mentioned regarding Checkout. Like many of them, I find Stripe to be very well thought out and easy to implement. As far as Checkout goes, the idea is great, but it might need some updates to make it useful to a wider audience. As others mentioned, the "Remember me" function was enough for me to not use Checkout. It is confusing, perhaps because it introduces a mental shift in the user's mind, where all of a sudden they need to understand how this other company "Stripe" will magically keep their info across devices. A way to hide that field wouldn't harm anyone (other than Stripe's ability to do branding). It would also be nice to allow style customization of the form.

Now if only they did same/next-day payouts. The founder once said this was possible if you emailed him. I emailed him and got zero response, from him or anyone else, so I'm guessing they only do this for super-high-volume merchants.

We set it up over here @Patreon and it was EZPZ. One issue that wasn't clear from the documentation: the "custom" setup (https://stripe.com/docs/checkout#integration-custom) is preferable for so many reasons (and it's no harder to set up - not sure why it's not just the only option). It doesn't "take over" the form so that a credit card is required on submit, and it also returns a bunch more relevant info, like the last 4 digits of the credit card, the expiration date, etc., so you can save and display the card info for future checkouts.

First off, the entering of email addresses and the "remember me" stuff is confusing for my customers. We sign people up for a free trial and take their credit card details before we sign them up as users. Even quite technical people have dropped out of the flow after signing in with Stripe, thinking "I've given them my email", and so haven't properly finished the signup process (I'm guessing I can probably get this email, but I'd still need to prompt them for a password).

The second big issue is that the constant changing of the form kept breaking various integration/acceptance tests that I had written. This was pretty frustrating as it seemed that I would get a different box from time to time and my tests would start failing.

I get the desire to A/B test, and the desire to build a network of users who have already given their credit card details (obviously amazing for mobile) but it would be nice for us customers if we had a flag where we could switch it off.

Have been using Checkout on https://deployer.vc and https://zoned.io - it's absolutely excellent: very easy to integrate, and looks really good. Will be switching over the other products as well over from PayPal.

I'm particularly happy that iOS Chrome is now a "first class citizen". There were some shaky times before where it (provided you saved your form) showed the mobile view that Safari gets; then where it failed completely (with a JS alert()); where it showed the desktop modal (okay, but a bit janky) and finally where it had a made-for-mobile modal.

I'm a big fan of Checkout otherwise: it's definitely simplified things for me. I'd just like to see more communication regarding changes: I discovered most of those myself from my staging site.

pc, is this cross-merchant? That is, if an end user of one Stripe merchant stores their card and then that same end user visits another Stripe merchant, are they remembered? I see "Stripe stores your card for this site and others", or wording like that.

Multilingual support would be great, and also a more customer friendly interface for those who might not be familiar with things like CVCs. Those two things are reasons I had to stop using checkout and use stripe.js instead.

I still don't completely understand how Stripe can be so cheap. How do they pass charges onto payment processors without incurring some sort of fee that is not equal to the market rate for all other transactions? Is there some sort of fee scale on the processor side that decreases as the transaction amount increases?

We use Stripe Checkout at http://leaddyno.com for subscription signups, using the custom integration features of the checkout widget. We also use it in our app for customers to update their billing information. It's great that they made such an awesome widget and ALSO made it very easy to customize and integrate programmatically! We love it!

I really want to use Stripe, but it would be great if they had a more favorable pricing structure for microtransactions. PayPal, for example, charges 5%+$0.05 or 2.9%+$0.30 (whichever is lower) for digital goods transactions.
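Using the two PayPal rates quoted above, it's easy to compute where the micropayment rate stops winning (the rates are the ones from the comment; check current fee schedules before relying on them):

```python
def micro_fee(amount):
    """Micropayments rate quoted above: 5% + $0.05."""
    return 0.05 * amount + 0.05

def standard_fee(amount):
    """Standard digital-goods rate quoted above: 2.9% + $0.30."""
    return 0.029 * amount + 0.30

# Breakeven: 0.05a + 0.05 = 0.029a + 0.30  =>  a = 0.25 / 0.021 ≈ $11.90
for amount in (2.00, 11.90, 25.00):
    fee = min(micro_fee(amount), standard_fee(amount))
    print(f"${amount:>5.2f}: best fee ${fee:.3f}")
```

Below roughly $11.90 the 5%+$0.05 rate is cheaper; above it, the 2.9%+$0.30 rate wins - which is exactly the regime where a flat 2.9%+$0.30 hurts a $1 sale.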

I am getting ready to launch a product. I was using WePay until they eliminated their checkout form. I switched to Stripe, read up on the API, and implemented the form. Now you tell me there is a simple checkout widget available. Sigh.

I love the UX for the stripe checkout. It seems like the integration script creates a full page iframe allowing the widget to have full control over the UX. Is there any guide to building a similar full page iframe widget for other applications?

Stripe is pretty sweet, and we're in their beta to receive funds in two days. Any idea how they actually do this? Two is certainly faster than the normal seven days, and I'd love any insight or theories from the HN community.

I really like these guys. It's really no-nonsense hosting, which as a developer, is exactly what I need.

I've been (stupidly) running my website, VPN, and e-mail servers all on a single EC2 instance, mostly because I had a bunch of AWS credits. I got some Google Cloud credits, so decided to move it there. I then realized that I'm spending $60 a month on a single instance, which despite having "free" money, is stupid.

I split everything up into Docker containers, and run them on Droplets now. Sure, I pay $5/month now for each server, but that's fine. One of the e-mail servers is for my wedding; I'll turn it off when I don't need it anymore. The interface for bringing up new Droplets is simple and clean, and lets me do exactly what I need to, no more and no less.

If you look at AWS or Google Cloud, there are so many available services that it can be daunting to get simple things going. I mean, it's not that bad, but once you've seen DO's interface, you realize how unnecessary a lot of it is.

I would still likely use AWS/GC for cases where I need to respond to changing load needs, which incidentally, is exactly what you're supposed to use it for. A DO + AWS hybrid infrastructure would be most ideal IMHO.

Second, it's integrated. Which, to me at least, feels much more natural than AWS where you rent a virtual server, and then a database separately, persistent storage separately... Because it's integrated, it's also simple.

And they have a datacentre in Amsterdam - two of them, actually, right in the heart of the European Internet. That means latency to their servers is not noticeable in much of the EU.

I'm a (moderately [1]) happy customer. But I have to ask, isn't this industry slowly turning into just virtualized hardware leasing? After the management tools commoditize, and I think there's a solid risk of that, isn't it just price and DC-location that differentiate?

And in that vein, wouldn't the winner in each area just be the one who bought their hardware the most recently? Instructions/dollar are still increasing on each CPU generation, but it'll take more than one generation for each machine to pay itself off. So, whoever is closest to the current generation pays the least per instruction, and can charge the least.

Or, maybe it's memory/bandwidth, which are mostly commodity, but slightly bottlenecked by the hardware (e.g, max on a motherboard, NIC throughput). Maybe the combination of prices in cpu, memory, and bandwidth leave enough variation between competitors to keep the field a little open? I donno.

[1] Modulo concerns about their ssh key management. I haven't looked after the last news ping on it.

DigitalOcean banned me because I was using their server to fetch Chromium's source code so that I could git-bundle/rsync its 12 GB mammoth of a repo and download it to the third-world country where I live (my network connection is really bad, even though it's the best money can buy). Apparently I violated their TOS. As long as they limit their TOS to such narrow purposes as hosting a WordPress site or doing straightforward things, I don't think they'll get too far. With AWS, Amazon doesn't care if I spin up a 1000-node render farm; as long as I'm paying, it's all fair game.

If you can scale your system using only 0.5 GB per node, you get more CPU per dollar, since the $5 and $10 levels both have 1 CPU and higher levels seem to be multiples of the $10 level. Does anyone have experience with this in a production system with a lot of users? Are there horizontally scalable database systems that work well on many nodes with only 512 MB each?
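The CPU-per-dollar claim follows directly from the tier shape described above (1 CPU at both $5 and $10, higher tiers as multiples of the $10 plan - these figures are taken from the comment, not a current price sheet):

```python
# price in USD/month -> CPUs, per the comment's description of the tiers
tiers = {5: 1, 10: 1, 20: 2, 40: 4}

cpu_per_dollar = {price: cpus / price for price, cpus in tiers.items()}
best = max(cpu_per_dollar, key=cpu_per_dollar.get)

print(cpu_per_dollar)             # $5 tier: 0.2 CPU/$; every other tier: 0.1
print(f"Best value: ${best}/mo")  # many small nodes beat few big ones on CPU
```

So a fleet of $5 droplets buys twice the CPU per dollar of any larger tier, provided each node can live within 512 MB of RAM.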

Only tangentially related, but when did DigitalOcean redesign their website?

I think my initial dislike is due to it being changed, but there are tons of minor usability issues that I never noticed on their old website.

I'm happy to see a view for new articles in the tutorials database [0], but at the moment it doesn't make any sense. When I hit it just now, an article from 11 minutes ago was above an article from 1 minute ago. Not only that, an article on the 52nd page says "less than a minute ago". From clicking around, it seems like some process has touched every article recently, so all those times, and how they are sorted, are meaningless. Also, at the moment the new and trending views give the exact same results, at least for the first page.

Despite generally rock-solid performance and uptime, I had a bad experience with DO recently. After experiencing repeated hardware failures on a node (with lots of downtime), I followed the advice of their support and did a snapshot-and-destroy of the failing droplet, then immediately attempted to create a new one from the snapshot. It failed to build. I then tried to build again from the automated backup they create when a droplet is destroyed; this also failed. Support just did not seem to understand the issue - I kept getting canned responses about doing a snapshot and then building a new droplet from the image, so I gave up.

The entire site had to be recreated from backups on a different VPS provider. Surely their system should be able to migrate droplets off failing nodes automatically - hardware failures happen, right?

Classic TC title: "to Take on AWS"... Don't make me laugh, buddy. AWS is probably a 1,000+ person operation with 30 different products, a marketplace, and support and ops teams. DigitalOcean is purely a VM seller with no cloud or storage features.

I moved my personal Web hosting from another provider to DO a couple of months ago. I'm spending the same amount I was paying the other provider, and I'm getting a hell of a lot more for my money. (I have two droplets running right now, one with my Web server and mail, one running some network services...and I have plenty of capacity on both to do more.) Plus, since it's an actual VPS as opposed to shared hosting, I have more control over it. I'm kicking myself for not having made the jump earlier.

I would love to see DO or Linode do a S3 type service as well. I prefer the persistent virtualization of DO and Linode to EC2 but also want to use a nice quick persistent file store that isn't on my own slice.

I could just use S3 from Linode, but that would result in more paid bandwidth and increased latency.

I use Linode and was drawing comparisons between the two:
1. 8 cores on Linode is what binds me to it. Linode rules here.
2. Digital Ocean is cheaper than Linode.
3. More network transfer on Linode (minimum 2TB).
4. Digital Ocean offers more RAM.
5. Private network - does not exist on Linode. Shame. DO rules.

OpenBSD -- the world's simplest and most secure Unix-like OS. Creator of the world's most used SSH implementation OpenSSH, the world's most elegant firewall PF, and the world's most elegant mail server OpenSMTPD. OpenBSD -- the cleanest kernel, the cleanest userland and the cleanest configuration syntax.

It's simple. It costs half as much as equivalent providers for a VPS - or less than half in the case of AWS. And it actually works, even though it's so cheap. No matter how rich you are, it just doesn't make sense to pay double or triple.

The question is, do you really make money on $5-a-month servers? I don't know if they actually do. The costs are support people, and now a large number of engineers.

The thing is with that much funding it doesn't really matter if their income is greater than expenses. They can continue for at least another few years regardless. During that time sane people who just need a VPS will take advantage of it.

My recommendation for DO's business model is simply to set a precedent and make it policy that if you pay only $5, you don't get any free support. That is the only real cost that sticks, so I suggest having a few different monthly support options, starting at zero support for $0 and going up. The main business issue a provider like this has is the conflict between the desire to provide good support and the need to keep unit costs low, and the solution is to separate support out. The main challenge in doing that is a cultural/expectations/marketing issue.

While not suitable for production operations, my go to has been a random one man vps shop. I have used him for years, because it is the cheapest plan I have seen. I pay $20/year per server, which makes it an easy decision to add another one whenever an idea comes up.

- No pooled bandwidth.
- Networking and routing, as well as capacity, need some work. Linode is much better in this regard.
- No custom kernels.
- IP problems. Still no deploy to different physical hardware by default.
- No private networking in most of its DCs.

And possibly many other small things I didn't mention. To me, most of those are deal breakers. And my problem with them is that they are not fixing or improving these issues quickly enough.

Meanwhile, Linode's SSDs are quickly approaching, and Linode has none of those drawbacks.

I've been using DO for about a year, and I've been mostly happy with it - but that's because the project I run, http://www.mybema.com, doesn't receive as much traffic or as many active users as I'd like. DO has gone down far too many times in the last year for me to be completely confident in them with a 100x userbase.

We run our infrastructure on both AWS and DigitalOcean.
1) DO consistently beats AWS on price/performance.
2) DO has a simple pricing model --> no on-demand / reserved instances.
3) AWS is more feature-rich, but DO continues to add new functionality like private networking and new data centers.

Only, their billing is a hot mess, mostly because they think it works and that their customers are wrongly entering the CC #. Same problem for 4 months now, and they have offshore support that reads scripted answers - they just read out whatever canned answer is closest to a billing question.

AWS is a cloud service provider with a huge ecosystem of services. Digital Ocean is a VPS provider.

It's like comparing a harddisk manufacturer to Apple.

Even EC2 is barely an overlap, since EC2 is a computation unit in the convenient form of a (very ephemeral) virtual server, not the virtual equivalent of an actual, permanent server. (And you're going to be in a world of hurt if you use them like that.)

I've been using DO for about 5 months now, and love it. I still host my main websites elsewhere (DreamHost, who, despite some issues, has been consistent in improvement and is fair in prices), and I use DO for things like Mumble servers, a few games, an SSH proxy from less secure locations, and a shared shell with friends for various skullduggery and fun. Very impressed with DO's service and price, but even more so the ease of use.

My main issue is that I would like a hardening script, instead of having to go through each new one I spin up and lock it down.

The most common path seems to be to generalize, because relearning most of your job skills every few years starts to get annoying the 20th time you've had to do it. It's different when you are younger and everything is new; you just chalk up a major tooling change as something else to learn. But when the next hot platform or architecture or whatever comes out, you get tired of running in exactly the same place. You also start to get a long view on things, where all these new things coming out don't really seem to offer any advantage that keeps development fun. It's just more and more layers of abstraction, and you start to see the nth demo of WebGL maxing out a 4-core modern GPU system doing exactly what you did 20 years ago with a single 32-bit core, 1/5th the transistor count, and all in software.

So how do you generalize? One word: management. You start to take over running things at a meta-level. You don't program; you manage people who program. You don't program; you design architectures that need to be programmed. You don't program; you manage standards bodies that people will be programming against. It's not a higher-level, more abstract language you go for; it's a higher-level, more abstract job function. The pay is usually better, and it's a natural career progression most organizations are built around. There are lots of different "meta" paths you can take, and because most of the skills in them will be new to you in your late 30s, 40s or 50s, they're at least interesting to learn.

The problem for some people is that these kinds of more generalized roles put you in charge of systems that do not have the sort of clear-cut deterministic behavior you remember from your programming days. Some folks like this, and look at it as a new challenge. Some hate it and wish for their programming days again. YMMV

So the next most common path is to become more and more senior as a developer, staying down in the weeds and using decades of experience to cut through trendy BS to build solid, performant stuff. These folks sometimes take on "thought leader" positions, act as architects, or whatnot. Quite often, though, industry biases will engage and they'll be put on duty keeping some legacy system alive, because their deep knowledge of the system lets the company put one guy on maintaining half a million lines of code in perpetuity versus ten young guys maintaining the same, all of whom want to leave after a few years to build more skills. The phenomenon is best seen in the ancient greybeard COBOL mainframe guys. Some people love this work - they can stay useful and "in the game" - but some hate it because it comes with the reputation of being stale and not keeping up with the times. YMMV

Probably the third most common path is to simply branch out and start your own gig. A consultancy or something where you get to work on different things in different places on short engagements. The money is good while it's coming in and you get to make your own hours. At some point you decide to keep doing this till retirement (if you can keep finding work) or to grow your business, in which case you generally end up doing the meta-management thing. There are thousands of these little one-man development shops like this and I wouldn't be at all surprised if this is more common than third on my list.

Probably the next most common path is to just get out of development entirely. The kinds of logic, planning and reasoning skills, plus the attention to detail required to be even a half-assed developer, can be extremely valuable in other fields. Lots of developers go into Systems security, Business Analysis, Hardware, etc. With a little schooling you can get into various Finance, Scientific or Engineering disciplines without too much fuss. The money isn't always better in these other fields, but sometimes the job satisfaction is. Again YMMV.

I'm 60+. I've been coding my whole career and I'm still coding. Never hit a plateau in pay, but nonetheless, I've found the best way to ratchet up is to change jobs which has been sad, but true - I've left some pretty decent jobs because somebody else was willing to pay more. This has been true in every decade of my career.

There's been a constant push towards management that I've always resisted. People I've known who have gone into management generally didn't really want to be programming - it was just the means to kick-start their careers. The same is true for any STEM field that isn't academic. If you want to go into management, do it, but if you don't and you're being pushed into it, talk to your boss. Any decent boss wants to keep good developers and will be happy to accommodate your desire to keep coding - they probably think they're doing you a favor by pushing you toward management.

I don't recommend becoming a specialist in any programming paradigm because you don't know what is coming next. Be a generalist, but keep learning everything you can. So far I've coded professionally in COBOL, Basic, Fortran, C, Ada, C++, APL, Java, Python, Perl, C#, Clojure and various assembly languages, each one of which would have been tempting to specialize in. Somebody else pointed out that relearning the same thing over and over in new contexts gets old, and that can be true, but I don't see how it can be avoided as long as there is no "one true language". That said, I've got a neighbor about my age who still makes a great living as a COBOL programmer on legacy systems.

Now for the important part, if you want to keep programming and you aren't an academic. If you want to make a living being a programmer, you can count on a decent living, but if you want to do well and have reasonable job security, you've got to learn about and become an expert in something else - ideally something you're actually coding. Maybe it's banking, or process control, or contact management - it doesn't matter, as long as it's something. As a developer, you are coding stuff that's important to somebody or they wouldn't be paying you to do it. Learn what you're coding beyond the level you need just to get your work done. You almost certainly have access to resources, since you need them to do your job; if you don't, figure out how to get them. Never stop learning.

I'm 41. I also worry about ageism but so far I don't feel that it has affected me yet.

> Do you have to go into management to continue progressing upwards in pay and influence? I know this isn't the case at some companies (e.g. Google), but is it rare or common to progress as an individual contributor?

That has not been the case for me. I'm currently doing software development for a startup - the same thing I've done my whole career. I do get asked to provide guidance and help for younger devs sometimes, but I don't mind that one bit, it's actually very personally fulfilling.

> Is there a plateau in pay? Is there a drop in pay switching jobs after a certain number of years experience because places are looking for 5+ instead of 20+?

For me, so far no. I'm currently making the highest salary I've made yet in my career. I've been here for a year and a half.

My age has not been an obstacle to finding a job yet; I've had plenty of interviews and offers over the last 5 years and have chosen the places I wanted to work, rather than the places where I had to. It's worth noting that I'm white, male and American, so I realize I'm less likely to suffer from workplace/interview discrimination with US companies than people in other demographics.

> Is becoming a specialist rather than a generalist the answer?

I'm pretty much a generalist web developer; I do backend and front-end work. On a nearly daily basis I work with Ruby, JavaScript, Postgres, Haml, Chef, CSS, Sass, shell scripting, etc. I didn't have to become a specialist to get my job, although the fact that I've been doing Ruby for about 10 years did help me get it. I think the answer is just to be good at what you do, whether that's as a specialist or a generalist.

> Are older devs not looking for new jobs because they have families and want more stability/are focussed elsewhere?

> What are the older people in your workplace doing?

I have two kids, 5 and 2. My coworkers are evenly split between men and women, are mostly in their 30's to 50's, and most of them have kids too. A coworker of mine recently returned from a ~5 month maternity leave after having triplets, and we've been flexible about her work hours/conditions because we didn't want to lose her. So we're definitely not averse to having employees with families. I look for companies that have this kind of attitude to work at. It's not as hard to find as you might think; as long as you're good at what you do, people will probably want to hire you.

I'm not sure to what extent my company is "typical" but you can at least count me as one "older" developer who is happily still working as a developer, was able to have a family without harming my career, and didn't get pushed into management.

All in all I would say, your early 30's is still young. Statistically you've got more than half of your life ahead of you, likely the best part, too. As we get older I suspect the demographics of our profession will change along with us, and there will be more older people in roles we stereotype as being for younger people. At least that's what I keep telling myself!

I'm almost 57 and still write real code that people use and employers make money from. The trick is to continuously learn new stuff. My whole career has always been spent at the leading edge of whatever was most important at the time. Sure, people sometimes don't want to interview you because they assume you are old and pointless, but that's usually when they don't even read your resume, blog, LinkedIn or whatever you have. There are people who think that way, and there are people who recognize that ability and experience matter. The trick is finding the latter while trying to avoid the former.

Some people don't learn anything new and become obsolete, or become management, or even have to start over away from programming. It's not easy to stay out front but you are the only one who can do it.

I've never done any 'IT work', and I've focused almost entirely on product development, over my 16 year career.

As a salary, I think I have plateaued at 160K, which is good enough for me. With 'adjustments for inflation', that's usually an extra $5K increase per year. There are people who make more than me, I know. For example, a guy I work with probably makes $200K (and he doesn't have a college degree).

There are always 'business problems' to solve with software, and there is always software to maintain. A lot of software never 'ends' - it just keeps going on, or dies dramatically, replaced by something similar. There's never been a better time to be a developer.

At a certain point, you'll have to become something like a 'manager'. For me, this is more of a 'tech lead' / 'architect' sort of role. I'm responsible for the quality, functionality, road-maps, integration, etc. I'm responsible for understanding the business domain, in and out. I'm responsible for managing the parts of the system, and ensuring that they all work together. I have to lead meetings, give presentations, work with the field and customers.

However, all of that is a small part, for me. I still code a good 85% of the time.

I get somewhere around 10-15 recruiters contacting me per week. So, I believe the job market is hot. But, I am really comfortable where I am. I work from home, and I run an entirely distributed team. We meet in person, when we think we need to meet. Things go very smoothly, because we're all experienced devs, and we fit together culturally.

I'm far from an 'amazing dev'. I don't have a slick GitHub account. I don't run any important open source projects. I just know how to do a lot of different things, I am very efficient, and I have a great track record for success. I know that on any given week, hundreds of thousands of people use software that I created, and that makes me feel good.

I'm about to turn 53. I spend most of my day coaching younger programmers at Facebook (because they're almost all younger). We pair program and talk. I work on speculative projects, some consumer-oriented, some programming tools and some infrastructure. I also research software design and the diffusion of innovation.

I took a 10 year excursion into being a guru, but I'm technical now and intend to stay that way. I love programming. I've never been a manager. I suppose that capped my pay, but I'd rather be satisfied with my work. I haven't noticed a pay drop with age, but my experience may not be typical.

The most important factor for me has been to keep coding. It gets harder. I have noticed a definite drop in my long-term memory, concentration, and general cognition, but I compensate by being better at picking important problems, being able to pattern match a large library of experiences, and not panicking. As Miracle Max said, I've seen worse.

I started learning Haskell a couple of years ago, and that has really helped expand my programming style. I still don't like it, but it's good for me. I'm also learning React and the reactive style of coding UIs. That's also a brain stretcher.

I'm older than you and I've been looking for a new developer role recently. The main problem I see is that there haven't really been "old web developers" in the past - I've got about 15 years experience which is pretty much as much as it's possible to have in the web industry. People with more experience tend to be "software engineers who wrote web things" rather than "web developers" per se. Employers have expectations that web people are young people and as such building web software is something that you can only really do at the start of your career. The assumption is that if you have a lot of experience you'll quickly get bored and move on. Consequently it's getting a lot harder to find a job. I suspect that once we pass 40 we'll all have little choice but to move in to a more business analyst or management style role, or go freelance, until the industry is mature enough that age isn't something that works against you. A shame really.

I honestly don't understand this. I'm sure there is ageism, but when I'm reading resumes and interviewing I love the more experienced developers! The few times I've seen a solid 10 years' experience résumé followed up by a solid phone screen, it has been a feeding frenzy. So much so that my company usually can't even move fast enough to get an offer in the ring :(

On the other hand, it's really obvious when someone has 10 years of experience staying invisible and just hanging on. Those are the ones that get ignored in my experience, and for good reason.

Either way, I'm semi-retired. I do client work (iOS and experiential retailing installs) for about half the year, then I do my own projects for the other half.

I live in Vietnam but commute to NYC for certain client projects, so maybe 25% of the year there, the rest in Vietnam.

Prior to the move, I did 20 years focused mostly on new media/creative tech, so my skill range crosses from design through to code. This is pretty rare in NYC, so it's never been a struggle finding work; age has never come into the equation.

It's probably an arrogant assertion, but if you are exceptional at what you do, none of this nonsense about age will matter at all, so one should always strive to be exceptional in one's career. For me, that's involved 18-hour days, 7 days a week of working, learning, exploring, making mistakes and maintaining a healthy curiosity about how things work. Every piece of software or motion graphic I see, I am constantly deconstructing in my head.

But I've worked with a lot of dudes that treat this as just a job, and those guys are on a trajectory I don't understand, so maybe I'm not qualified to comment. I suspect if you're mid-level or worse, or that is the most you've aspired to contrary to talent or skills, you'll be put out to pasture at some point.

The great thing about this move to Vietnam is that a single day at my day rate pretty much pays for an entire month of living here. So those months I'm not doing client work, that's a shit ton of free time to throw myself into technical and creative challenges that you wouldn't normally encounter working on projects for others.

As an example, I've always been fascinated with the tablet as a publishing platform but have always felt the current toolsets (Adobe DPS and Mag+ specifically) are glorified PDF generators that completely ignore the unique user experience properties of the device. So I spent a good six months in Vietnam working on the problem. And now I have a publishing platform that eclipses Adobe DPS on a lot of different levels. I also publish a digital-only fashion magazine here in Saigon (eating my own dog food). So life is kind of random.

I'm 40 and still actively developing software; I work with another developer who is in his early 50s and a bunch of people in their late 20s or early 30s.

I've not seen a hard plateau in pay but there's definitely a certain amount of soft leveling off in terms of percentages -- early in your career it is way easier to find a new job with a 50% pay increase, once you get into 6 figures that obviously becomes increasingly harder to repeat.

The only pay drop I've had was voluntary, to work at a startup I wanted to work at more than I wanted to maintain the pay I was making previously.

I think you can remain a generalist if you "specialize in being a generalist". My current job is doing Android client software development, but at home I code mostly in Go (servers, camera control systems, embedded Linux GUIs, etc) and I am still constantly learning new tech, new languages, etc, and still enjoy playing with technology in general seemingly much more so than even my late-20s/early-30s coworkers. Just built a RepRap 3d printer at home, have been learning about camera lens design and creating some custom lenses for my cameras (relatively basic Double Gauss designs with 4-6 elements at this point), etc.

I still code every day, but my understanding of what is important has changed a lot.

I care much more about the solution as a whole than the technology. While the technology is important, most clients care more about correct results. From the business side, nobody has ever told me "Thank God you used TDD over Angular with a NoSQL database". But on the other side, I have seen software that crashes every other time it runs, yet big companies are still willing to pay 6 figures to use it, because when it runs, it solves a very complex problem for them. So understanding the whole solution, and why it is valuable, has become much more important. And that is what has kept me a valuable individual contributor.

I went into management for a while and found a few cultural differences, like that Indian women are way smarter than most of their team members. Also, with younger people, some of them need to be professionalized before they can be fully useful; once I had one who maintained that being late to work because he had been drunk at a party the previous night was a reasonable excuse, because he was the king of JS in his shop. He didn't last 6 months.

Nobody can guarantee you any pay scale, you make your own profession.

Family becomes a big factor, so job jumping is not something to be proud of, even as a young professional, it can be easily read as lack of maturity, and it plays against you in your resume.

Specialist vs. generalist: there is room for both, but just be careful that you don't become a specialist in a passing fad. It's better to accumulate specializations, so you become a well-rounded generalist.

Today I am coding on 3 different (but business-process-related) projects. I am part of the "think tank" that designs the mathematical models behind the different products, and I also work with the rest of the senior team on how to bring the energy of the younger people to a more self-disciplined and productive place. We are finding that too many people think that "loud and opinionated" makes them noticeable, but the truth is that we cannot put high-value products in the hands of the frat house king (to put it in stereotype terms: the bullied geek in school probably has many more chances than the high school quarterback).

48, working at HP. I code every day and also get to tinker with embedded systems, optics, lasers, sensors, etc. Every day I can't believe I'm getting paid so well to have so much fun. I do keep up with the latest technology in my field.

> Do you have to go into management to continue progressing upwards in pay and influence?

No, like many corporations we have a dual path system although one level up from my senior engineering position I would have to do some visionary stuff, which I'm not good at so I'll probably stay at this level. Pay is not directly linked to position here.

> Is there a plateau in pay? Is there a drop in pay switching jobs after a certain number of years experience because places are looking for 5+ instead of 20+?

Doesn't seem to be the case here. I could imagine switching jobs gets trickier in your 50s because hiring someone new at high pay appears riskier.

> Are older devs not looking for new jobs because they have families and want more stability/are focussed elsewhere?

Yes, major issue with two kids in middle school and good benefits at current job. Planning on being more flexible in a few years...

> Is becoming a specialist rather than a generalist the answer?

I don't think so. As an engineer I think it's always good to have a balance between a specialty and a broad base. I've benefitted more from learning new skills but having a specialty is often good to get a start somewhere.

> And lastly: if you're in your late 30s, 40s, 50s, what are you doing at your job? What are the older people in your workplace doing?

Boring stuff: working with outsource vendors and CMs, working through regulatory issues.

Surprisingly, there's almost no corporate training and bureaucracy left. I think first all that stuff was outsourced and then we decided that our vendors were too expensive and just got rid of everything. Win!

I'm hitting 40 this year. I've been a professional in the tech field for almost 20 years, and a hobbyist for another 5 before that. I am completely self taught. I never took any computer science courses.

I have done pretty well in the field. I eventually focused on data warehousing and business intelligence. I worked for a startup that was recently acquired by a huge company, another startup from early on, and the highlight of my career was working on the Metrics team at Mozilla. I eventually accepted a management position in that team, but after a few years, the stress was getting to me and I missed coding so I switched back to the technical track and I'm doing software architecture at Pentaho, a business intelligence tools company.

I live on the east coast, I work from home full time. I make a good salary. I took a very small drop in pay when I left Mozilla, but it would be tough for many companies to compete with the full scope of life and benefits at Mozilla, and I wasn't unhappy with the change. I am on the upper end of the pay scale, but having been a manager at a couple of different companies, I also know that there is still plenty of room for improvement, even staying on the technical track.

I like @bane's reply, although I feel that personally, there is an important distinction between the middle management handling hiring, firing, performance reviews, and bureaucratic BS and the director, CTO, VPoE, or team lead doing the abstract work he discusses. Maybe I just got unlucky, or I didn't take advantage of the opportunities there though. :)

I would eventually like to move into a principal role, or maybe a director, but I personally have to be careful because I enjoy leading teams but I don't enjoy middle management. :) It is very possible that I might not hit that level because of my self-imposed restrictions.

I attribute my success to a ceaseless passion for technology in general. I keep a notebook where I jot down any keywords or tech that I run across or hear mentioned so I can look into them in my spare time. I love diving deep into these technologies and understanding where they can be effectively applied. In most people's books that would make me a generalist, albeit within a specialized field.

I don't pull as many over-nighters as I used to a decade ago. I am more concerned about stopping work in the evening to spend time with my family. That said, I have never felt or acted like a "5:01'er", and I don't believe I would continue to prosper in this field in a way I want if I were to become one.

44, and yes, it's a concern on the horizon that there might be ageism - but so far I'm not seeing it.

> Do you have to go into management to continue progressing upwards in pay and influence

That depends on how much pay and influence you want. At some point, influence means managing. If not in title, certainly in actions.

> Is there a plateau in pay?

Yes and no. If you stay in the same qualification range at a given company, your pay will stagnate modulo annual increases. Move up or out to improve.

> Is there a drop in pay switching jobs after a certain number of years experience because places are looking for 5+ instead of 20+?

There can be. If you can, trade the drop for something you care about that advances your career. E.g. 2 jobs ago I took a pay-cut, but that translated into being given the responsibility to build a new team from scratch. It was something I wanted enough to take the cut, and it was a great learning experience. I subsequently traded back for money ;)

> Are older devs not looking for new jobs because they have families and want more stability/are focussed elsewhere?

Can't speak for all - I usually pick jobs I like, at companies I like, for pay I'm OK with. As long as the job comes with growth opportunities, I don't look for new jobs because I'm enjoying what I do.

If I don't, I'll probably switch.

But yes, I've also settled down a bit more. I wouldn't uproot my family on a whim and move to a different continent any more, unless it was a stellar opportunity. Or maybe I'm not settled down, just pickier.

> And lastly: if you're in your late 30s, 40s, 50s, what are you doing at your job? What are the older people in your workplace doing?

I write code, and am trying to move into a bit more of a lead position, because that's what I care about. In general, the ones who want to write code do so. The ones who want to manage do so. And we've got people that are significantly older than I am.

In short, I wouldn't worry too much about being too old just yet :) Just make sure you keep your skills sharp.

It seems hard to believe that I will be turning 50 soon, but I am still doing what I have done for the past 30 years or so. Every day I walk to work, get a coffee, fire up emacs, and start working.

Is there a plateau in pay? Sure, but programmers make okay money, so I cannot complain. If I want to work more I can sometimes do consulting work or teach a bit, but generally life is getting too busy for too much of either. I stay where I am because I love running up the steps every day to work, but really I've been happy in almost every job I have ever had.

My career has basically taught me that being a generalist in an age of hyper-specialization makes me very useful. Being able to code in many different languages and environments helps, but so does having domain knowledge in related fields (economics and statistics in my case). Softer skills like writing and public speaking pay for themselves 1,000 times over, as does having a sense of humor and a willingness to share credit and help out when the chips are down.

The older people in my place are doing pretty much the same things that I am doing, but a few are starting to wind down and think about where they want to spend the final days of their careers.

It seems way too early to start looking at my career in retrospect, but really I cannot imagine anything more interesting or worthwhile than the past 30 years have been in programming. It has been an amazing ride with more cool stuff than I ever imagined back when I was typing programs out of Creative Computing on my Apple ][+.

I'm late thirties, and I'm writing code every day and fully intend to keep doing so until someone pries my ergonomic keyboard from my cold dead hands.

Something to keep in mind is that this industry is aging and maturing alongside us. You can't use historical precedent for understanding unprecedented events.

My personal hope is that the software developer monoculture (young dudes with ancestors in Europe or some parts of Asia) will mature into the kind of diverse profession where people aren't any more surprised by a female coder than they would be by a female orthodontist.

I'm 36+ so I consider myself old. I am a tech lead in a "startup that gone enterprise" and write Java, Scala, and web.

Most of my friends are between 35-45, all fully employed with good salaries (mostly Java / Enterprise shops though, but also some cool startups / Googlers / Twitter / Amazon)

My take on this, both as an older guy, and also as a hiring manager is that for me merit and skill matter more than anything else, I'm completely age, race, color and gender blind. (I recently hired a 50+ years old dev who didn't work for 5 years, he was simply that good)

Good developers of any age will always find a job; at least this is my theory.

Yes, there are people with 10 times 1 year of experience; yes, there are people who, as they grow older, have less desire to work long hours or cut their salary (due to having a family, which is legitimate), but I don't really believe that anyone out there will say no to a 40-year-old developer if she is an ace. If someone does, then they are missing out on talent and hurting their own company.

I'm 100% unforgiving to skill issues, but in my experience, usually the older the candidate, the better they do, merely due to more experience.

I'm shocked how many people with a BSc or even MSc in CS, and years of experience, simply don't know how to code. I mean some can't code their way out of a paper bag. But this has nothing to do with age; the last thing I care about is someone's manufacturing date. Really, it just doesn't make any sense to do so.

I'm in my mid 40's, still programming away but basically have been building and leading teams for a while. As it turns out I really like working on improving process. I've watched friends my age burn out and leave the industry. A lot of the guys that dropped out were people that weren't really obsessed with computers, but rather just chose that as their major - possibly for financial reasons. I can see how all of the minutia could get annoying, but I just see it as part of the business.

For me it's not just about building things anymore. It's more about what I consider - building things and doing it with style. Give me the time to plan an app, put together a team, predict our finish date and then build our system. My goal is to do it with the team feeling happy and proud of their work the whole way. No horrible crunch mode or last-minute heroics. At this point in my career that's what I aim for more so than just getting an app built.

I also like helping young people become proficient, reliable developers who know how to plan and maintain large systems. Young developers tend to have a lot of clever ideas and know the latest tools - but I have various skills that they lack or find uninteresting. So I don't see them as competition. I think young and old developers can really complement each other.

As for salary, it's hard for me to say since I'm in year 5 of a startup venture that just hit the black last year and is looking towards being a profitable company. So whether or not I will ever be looking for another job is something that I'm unsure about. I've pretty much decided that I would like to manage larger teams - not because I have to but because I enjoy it.

I can't speak for the community at large but I can tell you my path, my plan and my worries about that plan.

I just turned 36. I had been a manager for seven years across a few companies, managing teams ranging in size from 4 engineers to 35 (five teams underneath me). I reached a point in my last job where I was spending 80% of my week in meetings and the other 20% trying to stay on top of what my team was doing technically. I found myself becoming less and less useful in the technical discussions as the team was building up skills in new technologies that I didn't have time to learn.

I felt like I was losing my ability to be an engineer and therefore my ability to be a good engineering manager. I was not enjoying any part of my job at all. The rare opportunities to write code and learn new things were my only time where I felt good about the work I was doing.

So, I quit and got a different position as a senior developer. I told my new employer up front that I had been a manager for a long time and that I wanted to be more technical again and focus my career on technical expertise. In my new position I am able to lead and set technical direction without being a "manager" in the traditional sense; people don't report to me, but I help define what we're building and how we're building it. I am able to write code, learn, teach and explore ideas without feeling bogged down by management. My goal is to grow technically as much as I can and avoid becoming a manager who spends all my time in meetings again.

However, I am not sure how long this can last. At some point career growth seems to always steer toward doing less hands-on work and more managing of others, so perhaps I'll just need to find a way to enjoy that.

Now I'm the CTO of a start up and we don't have any older people (apart from the founders by a few months). About the only thing I can think of to say is to keep learning forever, as many different things as you can think of. With a background in development and that kind of mentality you'll always be useful to someone! :-)

39.5 here, have been a 1 man shop for 12 years. At this point, someone would have to be insane to hire me as FTE, and I would have to be insane to take it.

The money is better than ever, and I'm getting more and more interesting things to do.

One factor over the last 10 years or so (I've been in the game for 20 years now (yikes!)) has been having the experience to know which technologies to even bother messing with.

Far more important than the above, is to mentor other people, help people, and befriend everyone you can. It pays off in spades down the road when some C*O calls you up to lend a hand because he remembers when you helped him out a thousand years ago, trusts your judgement and skills.

Likewise, payback can be a bitch, so making "enemies" is not a great idea. Life is too short.

Go into management and learn to play politics. Sooner or later you'll have to. There will always be someone younger and cheaper who will be good enough for the not-so-challenging job you have. You just can't compete with them. Yes, there are places where one can advance much longer on a pure technical path, but there are so few of these jobs and places that it's just not realistic unless you are in the top 1% in either technical or luck skills.

If you want more money, sooner or later you'll have to "take more responsibility" and "lead the team". While on the management level just above the programmers, you'll still have some contact with the technical part, but as you progress further, you'll lose it and become a pure bean counter who looks at other programmers as resources.

And you will hate that, but you still have that mortgage to pay, savings for your kids' college, maybe a few vacations a year, or that latest gadget you want as an impulse buy.

And in time, you will hate your job as much as anyone else at that position. You will start to question whether it was the right choice to become a software engineer. But it was. You had some ten years when you liked your job and found it both well paid and satisfying, which is much more than the average person, even with a degree, can realistically hope for.

/rant

Being in a similar situation, I had to vent a bit. I made my choice to switch to the dark side and go the management route. I know I'll hate it, but that's the reality where I live. I know I could get a few more years as a software engineer in Silicon Valley, but USA is among the last places on earth where I would like to raise my family. So, management, here I come.

I think anyone who is a seat-filler has a problem. But if one codes for fun, if one sees new technology and just DLs a tutorial and starts using it, if one is always thinking about how one's code can be tighter, such that every time one looks at one's code one rewrites it, one will be OK.

No need to stop coding to make money. Top coders do fine, and then one gets to code.

I am consulting for a company where I gave up a top spot so I would have more time to work on my startup: http://tiltontec.com/

I am sixty-two, have been coding head down on hard problems since 1978, on the Apple II.

I'm 38, and I work at a small company with 11 full-time employees. I'm tied for oldest. It's by far the best place I've worked, just in terms of general autonomy and not worrying about stupid stuff. We've also released a number of hit games, which helps people to stay relaxed, I'm sure.

I'm the eldest of 5 developers, but I think the youngest is 29-30. We're essentially all generalists, although we have individual specialties. A couple of guys have really deep knowledge of iOS strangeness or shaders. I've got some specialization in game AI and physics, as well as game design skills. Any task can go to any dev and come back with reasonable results. There's no hand-holding.

We're an iOS shop, so my day-to-day coding is in Objective-C, although I do a lot of tools programming in Python 2.x.

Generally, I get to do what I want, with some exceptions. There's a strong culture of just doing something that helps the company, without necessarily being tasked to do it. Taking a day off to do a research project is also tolerated when we're not on a really tight deadline.

Unlimited vacation and sick-days, within reason.

I don't see working for anyone else in the future -- I'd have to start my own gig.

There's some temptation to work for a Google, but at this point in my career it's getting a bit undignified to work for other people as an employee. I.e. I don't want to deal with your BS, unless you're a client (I can fire you).

I see a lot of "coding-centric" answers, but I think the most valuable asset older programmers have is their experience. So I would say, you go into "project lead" mode (which you could read as management, but I think of that hat as non-programming).

In other words, you sit in the planning meetings & your experience on past projects helps get over that "where do we start" mode. You make sure the proper QA and testing is being done, things like that.

Your day is filled with many other tasks than just writing code. You go home at 5pm and work on your private projects for fun and interest (not that 9-5 is uninteresting, but you don't have the time to "play" so much any more).

This has always bothered me since my early twenties: my Dad was a programmer into his 50's (albeit, as a manager too) but he'd actually risen to those rank from an engineering apprentice so it's a bit different.

For me, there's the obvious path into management but being good at your trade does not imply you'll be good at management.

I think there's a more subtle path too: consultancy. I particularly like consultancy because you can start off basically as a freelance developer and gradually raise your profile into project management (if you own a consultancy team) or architecture design or CTO-type problems. It's much easier to get away from the code whilst still avoiding the management trap.

Of course, that assumes the need to move away from the code, but I know I don't learn new technologies quite as well as I did 10 years ago, and that'll only get worse over the next 10-20. Also, as you get older, you generally need to find higher-value activities, and a monkey coder is not at the top of that pile.

Don't listen to anyone who says that you can make as much as a programmer as you can as a manager. The best programmers in the world with no management experience are going to cap at much less than a million a year in 99.9% of cases. Usually 400k or less. That's still good, and if you are happy with that, stay a programmer! Just don't justify it by saying that's the most you could make.

People who go into management literally have no cap in earnings. There are people who started as engineers and worked their way into senior management and even C suite positions. These positions can pay 7 or even 8 and in some cases 9 figures a year. The cap is much, much higher than you could ever make as just a programmer.

I'm in my mid 40s, went through some life burnout due to trying to start my own business, and had to come back to just being the most solid engineer I can be.

I do contracts almost exclusively because I have no faith in the employment market as an employee given the current trends in hiring.

Also, I don't feel that being an employee makes me more of a team player. In most places contractors are doing the real work while employees sit around chatting at the water cooler. I'd rather get work done.

I'm a generalist and, in spite of the rather idiotic statements about that in the first comment, it's really the only way to go; if you are not a generalist you are likely not employable regardless of your age. Any shop that has hordes of 20-year-olds spitting out HTML/CSS is wasting their time.

The beauty of being a generalist is that once you have enough experience and a core set of tools, you can add new ones or not at your leisure. The pace of things is really not that fast; about 80% of all tools that get released are just junk that no one will remember in a couple of years.

One benefit of being an older developer, is that in a decent shop people tend to notch down the bullshit factor, because they know you have heard it before.

Conning people into doing things that are stupid is reserved for the 20 somethings.

There's not been a 'traditional' software career yet, so it's hard to tell if what's happening is what 'should' happen.

Thinking about this from a numbers standpoint, the market for people with software development skills on a truly national (or global) scale only really developed in the late 70s at the earliest - I'd say not until the mid 80s did we see enough of an uptick that the idea of a long-term career for large numbers of software developers was viable. With that viewpoint, we're just now seeing the ~30 year mark from the start of that time period - people who started in their 20s or 30s in software are now hitting their 50s and 60s. Watching and learning from what their careers have been will be instructive for people, although I'm not sure there are a whole lot of lessons we can draw conclusively from that yet. It's only one generation, and the world of tech changed dramatically during that generation.

Will this always be a problem? I don't know - embeddable bio-devices may be the next seismic shift, but "the internet" - the idea of billions of people always connected to services - this was little more than a dream in the eyes of a few people back in the 80s. Given that viewpoint, the career of software developers in the "always connected" age of the internet has been not even 20 years.

Unrelated, I've had pretty gray hair since my early 20s, and I'm not sure I've been too affected by ageism, but I know it's been a factor during some hiring - people assuming I was in my 40s or 50s when I was ... 31. :)

My story is, I think, atypical. I have no college experience, have been in IT for 19 years, and am now 42.

I was hired to my current gig 11 years ago to fill an emergency need for someone with perl and B2B experience, where I showed myself to be competent and approachable, and received a token promotion and consistent merit raises.

I have made all my IT hires in a similar way. In 1995 CompuServe had an immediate need for anyone who could tell a mouse from a keyboard, and due to my experience troubleshooting modem connections to play better dial-up Doom, I was put right into tech support in the ailing company. Before they imploded, I was hired at an ecommerce VAN to troubleshoot comm problems and write comm scripts for some of their software packages when they were very short on good comm help.

At each of the companies I've worked for over the last 19 years, I've dodged layoffs, demonstrated competence and agility, been given a single token promotion, and have been paid below the market average for my position due to not having a college degree.

Pluses: Haven't been fired, laid off, aged out, or put out to pasture. I have had consistent employment, taking only two contracting gigs over the years, both while still employed full time. Plus no one gripes that I wear jeans in a business casual environment, or that I look like a hippy with my 21" hair.

Minuses: Fewer promotions, lower average pay.

If I did the math of some of my peers who negotiated more pay from employers, but were then laid off during low profit years, I would either break even or end up in the black by comparison.

By showing competence, a sense of urgency, and willingness to keep an enterprise system healthy for the long game, I've done pretty well, plus no pesky student loans to pay off.

...but on the other hand, I haven't written that killer app, founded my own tech firm, or otherwise found my way to riches. As 50 gets nearer, and as I cost my company more, any of that may change. I fully expect within the next re-org or two to be handed a severance package, and then see if my secret project-x is a gold mine waiting to happen, or if I've been kidding myself all these years.

Software is a craft. Why would we stop practicing our craft as we get older? Do cabinetmakers stop making cabinets? Not as long as their hands can hold the tools. I'm 53 and still a working developer. Over my career I've worked with languages from 8086 assembler and Pascal to C++, C#, and now primarily Python. I am called on now to do more leadership, and my judgement is sought on architectural matters more than when I was in my 20's and 30's, but the primary skill remains my ability to comprehend a set of requirements, and from the infinity of potential implementations distill one that will satisfy those requirements in a robust and maintainable way. It's a valuable skill, and since it has been feeding my family for a couple of decades now I see no reason to let it wither.

My father is nearly 70, and still writes & maintains those horrible departmental VB+Access apps. He started in his late 40s, having noodled around with spreadsheets & databases since the 80s (from whence my fascination with this stuff stems).

Sadly, the world of VB & Access is so alien from my own that we can't even talk shop.

I was doing dev at a large utility company in my early 20s. I was only doing it a few years (~4 or 5) when the tech stack was completely overhauled and it required me to re-learn the new stack. I started to transition across (in my own time, at my own expense) and then the company decided to outsource the majority of the work and take on the outsourcer's stack. This left me with the choice to re-learn again within a very short time frame, or make a change.

I figured it was a sector in constant skills cycle and decided to get out of the rat race.

By my late 20s I was a business analyst - having the tech/dev background really helped.

Now, I work in security. the tech/dev/business background is invaluable.

In short, generalism seems to be the path (in terms of skillset), whereas you can specialise in terms of career direction).

Most old (50+) programmers I've worked with have spent most of their time in some variation on theme of management, only to occasionally sit down and write code when their unique expertise (Fortran, Lisp, Cobol, APL etc.) is needed, occasionally to great surprise. A few move on to start highly specialized consulting firms focusing on the sort of things the 'kids' don't know anything about (Fortran, Lisp, Cobol, APL etc.).

The only 50+ programmers I've worked with who were still employed with programming as their main/only responsibility were those who'd been at the company since "the early days", had written and/or designed all the company's core systems, and thus were the ones who understood the system better than anyone.

Developers should be growing to become bridges between business and technology. Businesses rarely have technology problems. They have business needs that technology might help solve. Even though most businesses are becoming software businesses regardless of industry, it's from the perspective of managing the details of their business.

Learning and delivering strategy is far more valuable than just tactics (latest hip language/framework/stack), because a solution doesn't exist just in programming alone, but a combination with policy and process.

As you grow, you can become a strategic aligner that is not dishonest about using the latest toy at the expense of your customer's growth.

I'm in my early 30's, developed professionally for over 15 years.

The one thing I see over and over now is how secondary development starts to appear the more I interface directly with upper-level management. There is a major shortage of developers who can learn to understand a problem and leverage a solution to magnify competitive advantage.

I spend more time thinking and analyzing the problems (way more) before ever daring to trivialize something to whip up some code.

This ends up with my development work being tremendously more valued, instead of just being a means to an end. As I get older, the value I add is not just coding, but being able to architect a solution.

I'm in the "start your own gig" boat as far as people who have a useful skill set and don't want to learn an entirely new set of languages/frameworks/etc. I'm nearing my mid-30s and have a consultancy, but simply being a consultant with a decent rate is a better option for making more money without having to play as many corporate games (provided you have the discipline and tenacity to work well by yourself and stick with it).

The other side of that is creating products, which has been beaten to death here (look to patio11 for great inspiration and excellent insight), but it's quite relevant to this thread. It's somewhere between a massive amount of work and a crap shoot, but if you can figure it out and do it well, in my opinion it's the best of all realistic worlds for people in our position.

It depends on you. If you are one of those developers who, like butterflies, fly from one framework du jour to another, then you will find yourself obsolete pretty fast. There is always going to be someone with more time on their hands to convert Spring to asm.js running in a JS emulator implemented in Haskell.

If on the other hand you are interested in the business purpose of what you are doing, then you may have a long and rewarding engineering career ahead of you. Developers of the first kind (butterflies) are a dime a dozen. The second kind is much harder to find: someone who understands the business. I would recommend specializing in business but remaining a generalist in technology (they haven't invented anything new since LISP and APL anyway). As a bonus, if you get sick of development or modern developers, you can easily transition to the business side.

Largely it depends on the company culture. As you approach your mid thirties these are the questions you should be finding answers to:

Does the company have a technical development path? Do developers get promoted from developer to senior developer to technical lead, or is the organization flat (a bunch of developers reporting to a non-technical manager)?

Does the company value employees with experience, or does it assume that everybody is an idiot and only a select few can make decisions? A good way to assess this is to look at how responsibility is spread around the org chart.

Can you see yourself working for the company in 5 years' time? What about 10 years? What about 20 years?

The sad fact is that after 40, even if you are the best developer in the world, changing jobs is going to be more difficult, so finding a company culture that works for you is vastly more important than more pay.

I am in a vertical industry, i.e. something else (energy) plus computer science. Lots of purely trained CS types do not do well here because they don't understand the domain. My degrees from MIT and Stanford are in the domain.

Incidentally, Google has matured and hired some of my classmates. Facebook still seems to be more of a CS kindergarten.

My manager is at least 55+ (he retired, but came back because he was bored) - he writes code all day. My CTO is 50, he also writes code (though not as much as my manager).

From my (limited) experience, it looks like, as we age, we have these options:

1. Continuously learn new things - this negates the "old man" perception in the industry

2. Be good (not necessarily bleeding edge) in programming, but have good domain knowledge (this ties us to one domain though) - these kind of people are very valuable, as most programming jobs don't need bleeding edge skillsets.

There are three popular paths you can take as you get older: You can become obsolete and eventually find yourself laid off and unemployable. Or you can move into management. Or you can keep learning new technology, becoming better, more employable, and able to demand higher bill rates year after year.

To a large extent, you get to choose which of these paths you prefer to go down.

I seem to have personally gone down path four: start your own business. Consulting, unlike employment, tends to map bill rate exponentially to experience. And selling software products... well, when was the last time you decided not to buy a SaaS product because the company's founder seemed too old? That's what I thought.

The big thing is that developers move to "adult" companies. The pay is less, but they get to act like grown-ups, take vacations, have families... Sure, the work is more boring. Who doesn't love EDI or factory planning!!! But getting those business skills down and implementing what the accountants want makes up something like 50% of "computer" jobs, most of which are never, ever advertised.

I'm 40 and still looking at more school and something to keep busy another 20 years after the kids move out.

I'm in my mid 40s. I have been coding my entire career and I am still coding everyday.

The startup I had worked at for 3 years was not quite taking off, so a few months ago I decided to quit and look for something new. This is the first time I have quit a job outright without having a new job waiting. It turned out to be the best thing I have done. Once I broadcast the message that I was in the job market, my inbox quickly filled up with requests (I'm in the San Francisco job market). I spent the next week pretty much interviewing full time, and very soon received multiple job offers. The company I liked the most did not make the highest offer, but I successfully negotiated up to a satisfactory level.

In terms of work and technical skill, I feel I am at the top of my game. I'm not sure where people get the idea that younger people are better than more experienced ones. New knowledge often builds on top of old knowledge. Fundamental skills like logic, math, and data structures are just as relevant. Plus, experience is useful when you need to make judgments on where and how things are likely to change, and which areas are riskier and deserve more attention in design and testing. That said, the landscape of technical knowledge is huge and quickly expanding; there's something new to learn every day. I am aware that many people around me, both young and old, are really talented. There are always things I can learn from them.

In terms of pay, it is rising in absolute terms. But I'm not in management and I'm moving mostly laterally. I don't believe I am making more than someone in their 30s. In this sense both my career and my pay have plateaued. But I'm still satisfied with the work and the pay level. I think this is an excellent career choice for me.

I'm in my late 40s and switched to full-time management about a year ago. I didn't have to, it was a choice. I felt there was a management/leadership gap at my company, one I thought I could fill and do a good job. It wasn't about pay. Yes, the pay is greater; it is in line with my expanded sphere of influence.

That said, how many managers have you seen doing tech talks at conferences? As a developer, that is one place you can expand your sphere of influence. Open source code is another outlet.

I don't think there are limits to pay, or that it plateaus. There are fewer jobs paying 150K than 100K, and fewer jobs paying 200K than 150K. There are probably more management jobs than developer jobs at the higher levels, so as a developer, the competition is greater. Good developers can get good pay, great developers can get great pay. Are you a good developer or a great developer? Google good vs. great.

I'm not looking for a new job because they are finding me. I keep my LinkedIn profile updated, I'm active on Stack Overflow, I open source. I actively manage my public profile. I find an online reputation is almost a requirement for the higher-paying roles.

- You can move into management, but you have to keep your technical skills sharp. It is harder to find management positions than programming positions. Also, you cannot manage what you don't understand.

- I have stayed around 5 years on each job. Knowing the specific systems of company as well as the technology makes you very valuable at that company. However, you may be able to raise your salary if you move more often, but that has its own risks.

- Specialist or generalist? If you are willing to move, it is probably better to be a specialist.

- I still enjoy coding; the trick is to think of it as a craft. The feeling of being good at something is a big motivator.

I'm in my mid-30's and was very reluctant to go into management, wanting to code as I have done since my first job at 16. For the last 18 years I was happy being a developer, but over the course of the last few years I came to the personal realization that I would eventually need to move into a different role - and I didn't want to wait too long either. I recently got the opportunity to move into management and don't regret the decision. Sadly, it's brought me to 100% management, 0% development - but I make sure to review every pull request, so I know exactly what is going on. Additionally, I still work on little side projects of my own at night and on weekends - it takes care of the itch to code, and it's stuff I am really passionate about.

As for salary - I believe I was getting close to plateauing as a developer in my area (for jobs I would want to do), and I have opened up my career and salary path a bit more.

Regarding looking for jobs - I moved from the agency world in my early 30's to the startup world. I am so much happier, even with the perceived risk, I believe it has made me far more marketable for future endeavors. I got to work on far more interesting things, and the people I've met after making the switch has helped me tremendously.

My father's in his 60s. Formerly a Pascal/VBA programmer, he's found it very tough going over the last decade. 20 years ago, he was working for the London Stock Exchange but now scrapes by making (actually very impressive) complex Excel macros for local small businesses.

It makes me really sad. I've tried retraining him in web development, and he actually picks it up really quickly, but I doubt there'd be any work for him out there given his age.

If you walk into any Fortune 500 "enterprise" environment, MOST of the employee developers working on the core business systems are typically in their 40's, 50's, and up.

It's not as "sexy" as tinkering with this month's Scala/Node/Go/Rust/Julia fad... but when you get older and have family and other commitments, perspective often changes. A lot of guys just want to "get things done", and then go have a life outside of work. To be fair, most developers continue to learn new technologies and skills throughout their life. But the drive to always be on the bleeding-edge with your professional work tends to be a trait of younger developers and smaller companies.

I think a large part of the fear of age is that we don't see a lot of middle-age web developers. That is because Generation X was really the first generation for which web development even EXISTED during our entry-level formative years! So I'm not convinced that we will all simply vanish into management 10 years from now. Rather, I think you'll just see a lot of middle age Gen-X web or Java developers, with perhaps younger guys focusing on newer niches (e.g. wearable devices, VR, pure client-side JavaScript with little to no backend, etc).

Or maybe web development will become a more cross-generational field, with middle-age and younger developers working side by side. Hard to predict the future with certainty. At any rate, I'm about to turn 40 myself, and I stopped stressing out about my "exit strategy" a few years ago. I'm currently working for an exciting small start-up. I ENJOY being hands-on with the code... and as long as I maintain that passion and desire to learn, I find that my income and responsibilities keep going up. I'm sure that will plateau at some point soon, and maybe decline later in life if I choose to slow down a bit. But I find that I'm still highly employable among the employers that I want to work for.

A few weeks ago I got an email from a person who wrote a famous piece of software in 1971. He had been given some software I had written and was asking me questions because he intended to make a few additions. My software was written in C#. I don't know what he used in 1971, but most probably it wasn't C. I have to admit that his questions made my day (or week). I learned that he is over 70 years old and programs every day. I hope I will be doing the same at his age.

I was a dev for 20 years at the same company - went from being an Assembler and BCPL programmer to C, C++, Visual Basic, and learnt web stuff when that came along. Then the mid-life crisis hit (well, more like my daughter was grown up so I had freedom to move), I thought about career changes and became a tester. A few years after that I moved from the UK to the USA and am loving my new adventure: working at a small company as their main exploratory tester, working on several projects at a time, all sorts of domains and techs, and still learning new stuff.

The devs I worked with for 20 years either stayed and stagnated with average pay rises every year, or moved on to new firms for a bigger pay rise. One went contracting, earnt a lot of cash and retired to be a farmer in Cornwall. Another dev retired with a nervous breakdown.

I'm 39 and write code every day for my employer, but outside the scope of the engineering team. I guess it may fall under exploratory/architectural work for what might be future products, though if they gain any traction they will move into the engineering team, and I move on.

I guess this falls under the "more and more senior as a developer", but I'm outside the direct line of fire of bugs, deadlines, etc.

I can't complain, and I'm often working on newer technologies than the folks in engineering, keeping my relevance.

"I am a 20 year resident currently in the process of being "de-located", and will be leaving San Francisco in a few weeks, destination unknown. A little known secret about the tech industry is that if you're not in your 20s or early 30s, you are basically unemployable. It's a great gig for the kiddies, but if you're an adult with a family and responsibilities, you'll learn all about the magic of "at will" employment. Not to mention that many/most of these companies are run by financial criminals/sociopaths who could care less about anything other than lining their own pockets."

I'm a 37-year-old Italian computer engineer. I made my first super-easy assembly program when I was 7 and have loved programming ever since. I've worked for big companies and left them for a small company where I learned a lot. After 4 years I started freelancing. Now, 4 years later, I can say that my pay grew a lot during freelancing. I think it can still grow, maybe 20 or 30 percent more, so maybe there will be a plateau. But I love programming!!! I can't imagine a management job. I need coding! I know that maybe one day I will not be able to learn new stuff as well as I have during these years, but learning something is one of the best parts of this work! I'm thinking about founding a startup, so maybe my work will be marketing/management but also coding. But as somebody said, I will also try to learn something in the machine learning field (my university thesis was about IT infrastructure optimization based on genetic algorithms). So try to understand what you want from your work life: money, fun, career? Then you'll know exactly where you will go.

I'm in my late 40s, and have been working as a developer since I was a teenager. Here's a simplified account of the last 20 years or so:

I spent 10+ years working for a mid-size company, progressing from developer to a sort of combination senior developer / IT manager. My salary grew at a reasonable pace. I was wearing a lot of different hats, and gained experience in a lot of different areas. That company went out of business a few years ago.

I then spent a couple of years at a small (12 person) web dev company. We had one in-house product and worked on various sites for various clients. Mostly ASP.NET, some Drupal. I took a bit of a salary hit there, making maybe 85% of my previous salary.

I left that company about a year ago, and am now at a fairly large company, primarily working on Dynamics AX custom programming, with some random ASP.NET/C# stuff in there too. I'm still not back at my old salary, from the company that went under, but I'm closer.

With a little more Dynamics AX work under my belt, I could probably jump ship for an AX consulting job that would get me back to that old salary. Or I could stay here and make a pretty reasonable salary, with modest gains, over the next several years. (There doesn't seem to be much room to move into management here, though if I stay long enough, that may change.)

Or I could try to go back to another web dev position, ASP.NET and/or Drupal, maybe. (That probably wouldn't get me much of a salary bump though.)

I'm not entirely sure what I'll be doing ten years from now. The company I'm at now is stable enough that I might be able to stay here until retirement, but I wouldn't count on it. I'll probably need to change jobs 2 or 3 more times before retirement. I try to keep my skills up to date, so I can stay employable, and, at some point, I'll probably start using the standard 50+ tricks on my resume: dropping my college graduation date, dropping the oldest jobs from the resume entirely, etc. And dyeing my hair maybe, if I get too grey.

This being HN, other people have of course talked about starting their own company. I'm not sure I want to do that, but it may become an attractive option at some point, especially if the health care situation in the US gets straightened out enough that I can afford to pay for my own health insurance.

I'm 50, and along with a senior coding job I run my own side hosting company, take occasional moonlighting consulting gigs, and am always interested in launching and trying little business ideas here and there.

I also get more and more interested in personal development. Getting through middle age, heavy swings of depression, emotional health struggles, and addiction struggles are issues common to most people, not only programmers or technical people.

Having overcome all these I've collected a set of very useful and practical personal improvement methods that I plan to gradually launch as my personal development business to help other people who are suffering from these issues.

It gets harder. You can continue to be an engineer, but it takes longer to find a job. (I've seen lots of openings for "senior software engineer", by which they mean "5 to 7 years experience". Great. I've got 25 years. So, you don't want me, even if you call it "senior".)

But there are some places that want more experience. My current job wanted someone to come in, take the central piece of a new embedded system, and not have to take time on a learning curve. They didn't have any problem seeing the value in 25 years of experience.

Does salary plateau? More or less. Salary growth tapers off after about 10 years experience, or so it seems to me. It still grows some, though.

My dad was programming right until his retirement. He always refused any sort of management role (though he occasionally got a lead role for a high-profile project thrust upon him). I know one of the things he worked with was Java, so it definitely wasn't all old stuff. He was working on an open source project in his spare time, though strangely that seems to have stopped since his retirement. I should ask him about that.

I'm 40 and still young. I'm learning tons of new stuff, developing into new areas, and started as a freelancer two years ago (which boosted my pay quite a bit). I still have great plans to work on ideas of my own in the future. No idea where I'll be in 10 years, but I bet it's something totally different.

We started a podcast, Grumpy Old Geeks (http://grumpyoldgeeks.com/), where we answer damn near every question you just asked in one episode or another. My cohost and I are both 20-year web vets, in our 40s now, so we're dealing with all that bullshit.

It started in 1999 during the Dotcom busts that flooded the market with cheaper labor sources.

Suddenly if you had a good job with a good salary some 20something working for $20K/year replaced you.

Being unable to find work and provide for your family really wrecks the ego. Most of my friends chose the suicide-by-shotgun route. I went to a lot of closed-casket funerals and then got too depressed to go anymore.

My last job was in 2002. I thought I had a good job, but my employer only hired me to 'super debug' the main software they had built with those cheap labor sources; they held a hackathon with prizes, and none of them could make it stable, good quality, or secure. So I got paid $150K/year, fixed it in two months, and then was fired even though everything worked great. I've found that most job offers in my area are like that: they promise you everything, and as soon as you 'super debug' their problem, you're fired.

Happened to most of my friends, and they ate a shotgun.

Some 20somethings on Internet forums kept telling me to eat a shotgun, shotgun mouthwash, etc. I refuse to kill myself and I will keep looking for work and bootstrapping my own side projects. I am glad Hacker News is not like Kuro5hin or IWETHEY or some other troll forums telling me to kill myself. You guys are professionals here.

My 2 cents... The great programmers can stay technical as long as they want. If they find their way to great companies, they will do well on equity. I have several data points of folks in their early to late 40s like this.

People who aren't excellent, or not truly passionate about the coding itself go into management, sales, or consulting. (I'm in this group) There is age discrimination by people in the open market who don't know your work. There is much less age discrimination amongst people who personally know you.

I'm 48. I do architecture, development, and mentoring mainly. MOST of my contemporaries moved into management and "VP" type positions (but to be fair I come from a banking/finance background so that's just What People Do There (tm)).

I have a number of colleagues in my current position (in the Internet Security domain) around my age doing the same as I, although the average age is lower to be sure.

"The problem for some people is that these kinds of more generalized roles put you in charge of systems that do not have the sort of clear-cut deterministic behavior you remember from your programming days" - what could you possibly be talking about? So generalists work on non-deterministic systems? Clear-cut? Gimme a break, man; that statement, and perhaps your whole remark, is a load of bull. You are claiming that generalists aren't programmers. Generalists make more money than anyone else except for security specialists.

One data point: a good friend of mine joined [large rdbms vendor] out of college about 25 years ago and rose through the ranks in the rdbms engine group. He's now one of the senior people who knows where all the bodies are buried in the code, the forgotten bugs that resulted in the current weird algorithm in the xyz module, etc. I have no idea what he makes, but when I tried to hire him during the first internet bubble they slapped the golden handcuffs on him. These days he's rich from 25 years of stock options.

Your comp definitely plateaus if you remain an individual contributor. You become more valuable, and more highly compensated, by managing and/or mentoring people, helping evolve the technology to match the needs of the business and to find new markets, touching customers and revenue, etc. In my opinion this is required from all senior level technical people in the software industry.

The reason wasn't age as much as it was simply a desire to do more complicated stuff. To me the real challenge in technology has always been at the intersection of business and tech, that spot where you have people with a need meeting people with capability. The business side alone is pretty boring, and the tech side at the end of the day just amounts to variations on bits and bytes. Puzzle books. (Although, like I said, I love it)

Being a consultant, I see a lot of older developers around. I think there's a significant bias in the industry towards younger guys -- mainly because younger guys are the hotshots moving through development into management, and people like hiring people that look like them. [Insert long discussion here about age bias if you must. I prefer to just acknowledge it and move on.]

The "mistakes" I've seen from older developers come in two flavors: not specializing enough and not moving around enough. Some guys will "float to the top", and become more of a surface-level generalist. This is the path I see my own technical skills leading. That's great, but many times companies specifically want some kind of bullshit new technology because somebody thought it looked hot on HN. In that case, you're at a disadvantage. And after a few years pass like that, sure, you're the guy that can do anything, but only in C. That has real, solid, useful business value -- but it sucks to try to sell in the labor marketplace. I have a feeling there are going to be a lot of older startup founders over the next 30 years that fit into this mold.

The second way to kill yourself is to stay at one company, working on one product and one technology, longer than a couple of years or so. Pretty soon you're the master of C++11 as it applies to real-time embedded weasel-hunting robots -- in other words, you are truly the master of something nobody else on the planet cares about. That works great until they stop making weasel-hunting robots, then it sucks.

I think the problem with age as a developer is the same problem you have at 22: you have to wisely balance the time and energy you spend on learning new things. You can't learn everything and move around every other month, but you can't stagnate either. Instead, you have to carefully watch the market and anticipate where it's going to be in 3-4 years. As you get older, sadly, it's just easy to stop giving a shit as much as you used to. Sure, in five years everybody will be using X, but what will they be doing with it? I'll tell you what. In 99% of cases, they'll be doing the same kinds of things they're doing right now, that's what. So after a couple of dozen rides on the "Gee whiz! Is this cool tech or what!" wagon, it gets tougher to get back on again.

I think in the Midwest you have a lot of banks and insurance companies, and a lot of programmers still using COBOL. I know at my company we should finally get all the way off the mainframe in about 8 years, not because it's better or cheaper, but because no one will be left who knows how to use it. I code in Gosu, which is specific to a product in the P&C insurance industry, and I see the company staying with that product for another 10+ years.

I'm curious to hear the answer to these questions from a woman's perspective. I met a few lady engineers through IEEE involvement in university who were further along in their careers, but I haven't met very many other lady devs over the age of 35.

I am 45 and have been out of work since 2002. Nobody wants to hire us older developers; they all want cheaper labor sources.

Even NASA has a find-big-asteroids contest with $35,000 in prizes, because they got bitten by the startup-hackathon trend of cheap labor: 20-something college dropouts instead of programmers with 15+ years of experience.

Face facts: most hiring managers hate older developers. Unless they want quality and will pay a salary that can support a family, they can't hire us.

I'd need to move to find work, but my family doesn't want to move. I've been given opportunities at Google, Amazon, etc., but I would have had to move to take them. There's nothing for me in St. Louis, Missouri, USA.

I am 45 and my current occupation is as an Agile Technical Coach. It does involve a lot of travel, so I take breaks by doing remote-pair programming to spend more time with family and to keep up my coding skills. On Saturdays, I am starting to teach in the Math and Software Engineering Academy for kids 12 to 17. I do work a lot of hours, but the mix of work makes it very satisfying. I feel blessed that I got into this field.

Fantastic points; I've been thinking about these recently as someone close to mid-thirties. I've been learning hardware development and low-level hardware/software design over the past year, as I saw myself either needing management, a new industry, or ... death. I find it strange and almost awkward to work on projects recently with cocky 21-year-old versions of myself.

Late 30s. Recently took the plunge into management. I suppose I can manage. I don't love it. Can't say I recommend it.

At least when I was a developer, I could focus on the technical parts. If things went south, I could hone my skills for the next gig on someone else's dime.

I should probably be honest with myself and move into consulting and contracting before my skills degrade too much and I'm less relevant for it. I honestly don't care much for the politics of management, I'm not terribly charismatic, the company's processes are tiring and frustrating, and my team would probably be better served by someone who handles all that well. I'm scraping away time to hack when I should be taking care of the team.

I know it sounds a bit snarky, but for a lot of people it seems to be "go get a decent job instead of doing software development". Maybe not going full Gibbons, but I definitely see a lot of people move out of the software world as they get older.

I'm 31 and I worry a lot about this issue. I live in the Washington DC area, which is extremely expensive. I am a freelance consultant, but my hourly rates are not very high. I have one client, and if someone in an official IRS capacity were to look at us, they'd make my client make me a wage employee; the relationship we have is clearly not a subcontracting position. But this arrangement makes it possible for me to earn more from them than I would have as an employee. I don't know how that works; health insurance can't cost that much (I'm on my wife's now), but everywhere I've been has acted like a $50k employee == $100k subcontractor. Even paying for my own health insurance, my own vacations, and deducting my own taxes, I'm still netting more than I'd gross as an employee. I don't get it, but I'm not going to complain too loudly. And that's not even getting into the cost savings I have from not driving, not eating out all the time, not getting sick all the time, etc.

My wife has a fulltime engineering job working for the government. We have a small condo that is just about the cheapest sort of place you can get around here without living in a rathole. We have one new car between the two of us, which works because I work from home and don't drive (I have a 15 year old car). Between our two salaries and the fact that we cook better than most restaurants, we live comfortably.

But I worry about what having kids will do to us. We would certainly have to buy a house. The condo is almost too small even for the two of us right now, but "fortunately" I didn't have a lot of stuff to begin with because I've never been paid very well. I have always risen to a head leadership position amongst developers wherever I've worked, but it has never turned into anything meaningful. "We appreciate your work!" would have a lot more meaning if it came with greenbacks.

If she decided to stay home, it would cut our income in half. Not to mention that we'd have to find private health insurance. I just don't see a bigger place plus half-income working. We need to either move in-state (which she doesn't want to do) or I need to make more money.

I'm reluctant to look for a job because I've not had good experiences working in offices. I don't enjoy the type of work I'm doing or would get hired to do. I like programming, a lot, just not this same, old, bullshit CRUD all the time.

I had good grades in college. I've always had strong programming, math, and science skills. I've always had lots of interesting side projects. I get along with people really easily. And I've never been able to find a good match for a job. The only places that ever call me back are shotgun recruiters and consultoware dungeons. It's disheartening.

I got really depressed with the consultoware field about three years ago. I lived off cash for a month while I looked for a new job, and ended up taking a huge salary cut to get into the only product-based startup that has ever returned my emails. Turns out, they stuck me in their own consultoware project. After a year, they fired me without telling me why. I'm pretty sure it was because I was very unhappy, had worked it out so that none of my work was very much effort, and fell back to only putting in as much effort as was required of me, which was less than the 60 hours a week they expected.

I was on unemployment for a couple of months. I applied to everywhere I had ever wanted to work. I figured I had a bit of a time window and, at least in the first 2 months, wasn't terribly desperate to have a job right away. I reasoned I could "hold out for my dream job." Out of 30 job applications, not a single person called me back.

Eventually, a friend got me an introduction to the company he worked for at the time. I started contract-to-hire with them, and when the intro period was up, I took a chance on an ultimatum: let me stay freelance or let me leave; I would not take a salaried position. I've been working for them for 2 years now and it's been decent. I have a good working relationship with my client, he loves my work, they pay me, I don't go in to any offices, and sometimes the work is a little interesting. But it still doesn't pay very well, in the grand scheme of things. I don't think I'm being paid what I'm worth.

It feels like the only out for me is to start my own company. I think I would really like to do that, but I don't have the funding for it and I don't know the right people to get funding.

I'm 30. I write code, and I built a code generator to reduce my coding time. Do you have expertise in PHP, Java, or another language? Feel free to have fun. I think a lot of people are still doing the same job of writing custom applications.

Older developers get sent off to a farm in the country, where they will have the space to run around and play, in a way that they never could in the cubicle maze. You never see or hear from them again because they are just so happy there, and also because all the fiber (or copper, if they were naughty) to their premises goes straight to the HappyFunNet, which doesn't have a peering agreement with our boring old Internet yet.

But they're totally still working and not being replaced by dumber, cheaper kids fresh off the boat or fresh from the diploma mill. Totally.

If you aren't lucky enough to work for a company that values the aptitude of older workers, even without domain-specific experience, your options are to become a technically indispensable genius, capable of writing metacode that the younger chimps can turn into working applications without much hand-holding, or you can become a person that spends increasing amounts of time firewalling those experts and chimps from the people who understand money and people better than computers.

Bravo. I love JPEG. Amazing that it's been 23 years since its release and it remains as useful as ever.

I remember what it was like to watch a 320*200 JPEG image slowly build up on a 386SX PC with a VGA card. Today, an HD frame compressed with JPEG can be decoded in milliseconds. This highlights the secret to JPEG's success: it was designed with enough foresight and a sufficiently well-bounded scope that it keeps hitting a sweet spot between computing power and bandwidth.

Did you know that most browsers support JPEG video streaming using a plain old <img> tag? It also works on iOS and Android, but unfortunately not in IE.

It's triggered by the "multipart/x-mixed-replace" content type header [0]. The HTTP server leaves the connection open after sending the first image, and then simply writes new images as they come in, as if it were a multipart file download. A compliant browser will update the image element's contents in place.
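The mechanism above can be sketched with Python's standard library; the boundary token, port, and on-disk frame source are all illustrative assumptions, not part of any spec:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

BOUNDARY = b"frame"  # arbitrary boundary token, hypothetical

def multipart_chunk(frame: bytes) -> bytes:
    """Wrap one JPEG-encoded frame as a multipart/x-mixed-replace part."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(frame)).encode() + b"\r\n\r\n"
            + frame + b"\r\n")

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One response, never closed: each new part replaces the image.
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=" + BOUNDARY.decode())
        self.end_headers()
        while True:
            # Placeholder frame source: re-read one JPEG from disk.
            with open("frame.jpg", "rb") as f:
                self.wfile.write(multipart_chunk(f.read()))

# To serve: HTTPServer(("", 8080), MJPEGHandler).serve_forever()
# then point <img src="http://localhost:8080/"> at it.
```

A real server would throttle the loop and pull frames from a camera or encoder rather than a static file.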

This is very promising. Images by far dominate a web page, both in number of requests and total number of bytes sent [1]. Optimizing image size by even 5-10% can have a real effect on bandwidth consumption and page load times.

JPEG optimization using open source tools is an area that really needs focus.

There are a number of lossless JPEG optimization tools, but most are focused on stripping non-graphical data out of the file, or on converting the image to a progressive JPEG (since progressive JPEGs rearrange the pixel data, you can sometimes get better compression, as there may be more redundancy in the rearranged data). Short of exceptional cases where you can remove massive amounts of metadata (Adobe products regularly stick embedded thumbnails and even the entire "undo" history into an image), lossless optimization usually only reduces file size by 5-15%.
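The metadata-stripping kind of lossless optimization is mechanically simple: a JPEG is a sequence of marker segments, and the APPn/COM segments (EXIF, thumbnails, comments) can be dropped without touching the image data. A minimal stdlib-only sketch, assuming a well-formed baseline JPEG and skipping edge cases a real tool would handle:

```python
import struct

def strip_jpeg_metadata(data: bytes) -> bytes:
    """Drop APP1-APP15 and COM segments (EXIF, thumbnails, comments)
    from a baseline JPEG; the actual image data passes through untouched."""
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")   # keep the SOI marker
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            out += data[i:]        # unexpected bytes: copy the rest verbatim
            break
        marker = data[i + 1]
        if marker == 0xDA:         # SOS: everything after is entropy-coded data
            out += data[i:]
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        # APP1-APP15 (0xE1-0xEF) and COM (0xFE) carry metadata only;
        # APP0 (JFIF header) and all table/frame segments are kept.
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out += data[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

This is the core of what tools like jpegtran's copy-none mode do, minus their handling of malformed files.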

Lossy JPEG optimization has much more potential. Unfortunately, beyond proprietary encoders, the most common lossy JPEG optimization is simply to reduce the JPEG quality setting. This always felt like killing flies with a tank, so advances in this area would be awesome.

I've written extensively about lossy optimization for JPEGs and PNGs, and spoke about it at the Velocity conference. A post and my slides are available [2].

JPEG has shown amazingly good staying power. I would have assumed "JPEG is woefully old and easy to beat" but Charles Bloom did a good series of blog posts looking at it, and my (non-expert and probably hopelessly naive) takeaway is that JPEG still holds its own for a 20+ year old format.

For improving general-purpose gzip / zlib compression, there is the Zopfli project [1] [2]. It also has (alpha quality) code for PNG file format; since this functionality wasn't originally included, there are also third-party projects [3].

You might be able to shave a percent or so off the download size of compressed assets.

(For context: libpng is a "purposefully-minimal reference implementation" that avoids features such as, e.g., Animated PNG decoding. And yet libpng is the library used by Firefox, Chrome, etc., because it's the one implementation with a big standards body behind it. Yet, if Mozilla just forked libpng, their version would instantly have way more developer-eyes on it than the source...)

We've been using http://www.jpegmini.com/ to compress JPGs for our apps. Worked OK, although we didn't get the enormous reductions they advertise. However 5% - 10% does still make a difference.

We've been using the desktop version. Would love to use something similar on a server, but jpegmini is overpriced for our scenario (I'll not have a dedicated AWS instance running for compressing images every second day or so). Will definitely check out this project :)

In fact, on a JPEG-heavy site that I was testing with FF 26, there was such a degradation in terms of responsiveness that transitions would stutter whenever a new image was decoded in the background (while preloading).

The effort to save 2-4% in size was wasted on a worse user experience.

If my goal were to compress say 10,000 images and I could include a dictionary or some sort of common database that the compressed data for each image would reference, could I not use a large dictionary shared by the entire catalog and therefore get much smaller file sizes?

Maybe images could be encoded with reference to a common database we all share that holds the most repetitive data: perhaps 10 MB, 50 MB, or 100 MB of common bits that the compression algorithm could reference. You would build this dictionary by analyzing many, many images. The same approach could work for video.
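This shared-dictionary idea exists in miniature in zlib's preset dictionaries, which zlib supports natively. A sketch with a made-up dictionary (real systems would build it from the catalog):

```python
import zlib

# Hypothetical shared dictionary: in practice it would be built by
# analyzing the whole catalog for its most repetitive byte patterns.
SHARED_DICT = b"<html><head><title></title></head><body></body></html>"

def compress_with_dict(data: bytes) -> bytes:
    c = zlib.compressobj(level=9, zdict=SHARED_DICT)
    return c.compress(data) + c.flush()

def decompress_with_dict(blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=SHARED_DICT)
    return d.decompress(blob) + d.flush()
```

Every compressed blob then silently references the dictionary, so common substrings cost almost nothing; the catch is that every reader must hold the exact same dictionary, which is the main deployment problem with the idea.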

Data compression and image compression are a great way to improve the overall internet, its bandwidth, and its speed, maybe as important as new protocols like SPDY, JS/CSS minification, and CDN hosting of common libraries.

As long as ISPs/telcos don't go back to the AOL days of network-wide compression that reduced bandwidth at the cost of quality, I'm for this at the service level, like Facebook/Dropbox uploads. I hope this inspires more work in this area. Games also get better with better textures in less space.

Still, to this day, I'm amazed at the small file sizes Macromedia (now Adobe) was able to obtain with Flash/SWF, even for high-quality PNGs. Yes, we all have lots of bandwidth now, but crunching data down to the smallest representation of the same thing is still a good thing. With cable-company caps and other artificial bandwidth shortages, that focus might resurge a bit.

It's not clear from the article whether, in their comparison of 1,500 JPEG images from Wikipedia, they just ran the entropy coding portion again or also requantized. (I suspect they did just the entropy coding portion, but it's hard to tell.)

Judging better encoding from a changed quantization method can't be done purely as a function of file size; traditionally, PSNR measurements as well as visual quality come into play.

Good to see some work in the area, I will need to check out what is new and novel.

That said, a company I worked for many moons ago came up with a method whereby, by reorganizing coefficients post-quantization, you could easily get about a 20% improvement in encoding efficiency, but the result was not JPEG compatible.

I have heard similar things about GIF (that there are optimisations that most encoding software does not properly take advantage of). But I haven't seen any efforts, or cutting-edge software, that actually follow through on that promise. The closest I've seen is gifsicle, which is a bit disappointing.

It would be great if there were some way for an animated GIF's frame delays to opt in to being interpreted literally by browsers: a 0-delay frame really would display with no delay, so optimisation strategies involving splitting image data across multiple frames could be used, and when read by a browser, all frames would be overlaid instantly, modulo loading time.

What other things can be done to further optimise animated gif encoding?

I'm actually disappointed. I had hoped they would develop a still image format from Daala. Daala has significant improvements such as overlapping blocks, differently sized blocks, and a predictor that works not only on luma or chroma, but on both.

I like that Mozilla is improving the existing accepted standard, but using modern (mostly patented) codec techniques we could get lossy images to under 1/2 of the current size at the same quality and decode speed. Or at a much higher quality for the same size.

The speed of the modern web concerns me. The standards are not moving forward. We still use HTML, CSS, JavaScript, JPEG, GIF, and PNG. GIF especially is a format where we could see similarly sized moving images of the same quality at 1/8th the file size if we supported algorithms similar to those found in modern video codecs.

In all of these cases, they aren't "tried and true" so much as "we've had so many problems with each that we've got a huge suite of half-hacked solutions to pretty much everything you could want to do". We haven't moved forward because we can't. WebP is a good example of a superior format that never stood a chance because front-end web technology is not flexible.

This is so dumb: there are a million JPEG crushers in existence, but instead of advocating the use of one of these, Mozilla writes their own? Why not support WebP rather than dismissing it due to compatibility, and why waste time doing what has been done before?

Short version: http://criticker.com sells access to their API for apps. Any API account can retrieve a list of all users it registered on the site, then retrieve the cleartext password for each user it created.

There are so many WTFs in this whole situation that it's a wonder criticker has managed to keep the website online. Which is a shame, as it looks like a really useful website.

Whenever I get that plaintext password "vibe" on a site, I like to make my password something somewhat degrading to the site, like "thisSiteSux!", but slightly more vulgar. It's not my fault if they see it.

Once after having gotten the vibe, I ended up on phone support with the site in question. At some point I was instructed to "log back in with ummmm that uhhh same password you signed up with...." I could tell that my plaintext-dar hadn't failed me that time :)

Somebody is trying to outshine Mt. Gox in terms of amateurism. I wouldn't be surprised to find a number of other vulnerabilities (SQL injection?). Who the hell thinks it's OK to store unencrypted passwords in this day and age? It's not like we don't see a major security breach every month...

Also, I like the 'handler.php' endpoint returning some kind of ugly pseudo-SOAP. Ugh.
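For contrast with what Criticker apparently does, the standard alternative is to store only a salted, slow hash. A stdlib-only sketch; the storage format string and iteration count here are arbitrary illustrative choices, not any particular site's scheme:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> str:
    """Store only a salted, slow hash -- never the cleartext."""
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Hypothetical record format: algorithm$iterations$salt$hash
    return "pbkdf2_sha256$%d$%s$%s" % (iterations, salt.hex(), dk.hex())

def verify_password(password: str, stored: str) -> bool:
    _, iters, salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), int(iters))
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(dk.hex(), dk_hex)
```

With this scheme there is simply nothing for an API endpoint to leak: the server never holds the cleartext after signup, and verification recomputes the hash from the stored salt.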

It seems to me that many companies think computer science is just plain simple: you can hand any task to a new bachelor's graduate, or even a student who claims he can program.

I learned the hard way that even the creation of internal software APIs is hard, since you can make many errors. I made many errors after I came from university. Only after making them did I know better, because I had to manage another developer who had to use my API, and I saw what a mess it was.

External APIs are even more difficult to create, because things such as security have to be covered as well... but still, it seems many companies think any stuff scribbled by a first-semester student would suffice.

Despite the warning to the company back in 2010, I'm not sure he should be publishing this. He's putting the 2000-odd users at risk by teaching us how to get their passwords and usernames like that; it's even worse if we can get at email addresses too. I would bet the majority of those registered reuse their passwords.

I don't care if this comes off as trolling, but here it is: as I read through this, I thought to myself, much like the author, "how appalling!" - then I saw the word "PHP" - and went "oh, well, that figures".

If anyone from the Criticker team is here on HN, I'd be happy to help you guys get this resolved -- my company Stormpath (https://stormpath.com/) provides a really secure way to handle user accounts.

I'll help you guys integrate, or -- if you prefer, I'd be more than happy to dive into your source and help figure out problems and get them resolved. We have a pretty huge team of security experts, and we're all more than happy to help.

Raw password storage is more common than we like to believe. A simple way for webapps to communicate that raw passwords are not being stored would be convenient: a small 'NORAWPW' image in the footer, perhaps. It would ease my worries, especially with cryptocurrency-related webapps.

Well, bad APIs and bad security are the norm, unfortunately. As an example, DigitalOcean doesn't use request signatures, just a static API key. Would you consider that to be secure? Especially if we acknowledge all the weaknesses of SSL/TLS/HTTPS. https://plus.google.com/+SamiLehtinen/posts/1qFhf9fAbU6

I hope that the author notified Criticker about these issues before putting them out there on the internet. Not doing so would be extremely irresponsible and is sort of screwing over the users of Criticker.

This post is more about security than about APIs... I dislike the title. Also, I don't see how this is an issue. If the user signs up via your app and you wanted their password, you have it. Sure, it's a big deal if someone steals your key, but if you always do it over SSL, they have to steal the "phone" or the "app" that you use. And if they steal the phone, they can use things like "email reset password", because email will most likely be logged in anyway.

Why are there so many negative comments? Maybe those posters are vastly underestimating how many people who are just starting out read HN. I think it's a pretty good post to read after something like "X in Y minutes - Python" to get a very quick grasp of what the language is like.

I'm also not ashamed to say that, despite having written quite a few LOC of Python, I wasn't aware of named slices for some reason, and I think they can clear up some chunks of code I have produced (make them more readable).
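Named slices are just slice() objects bound to names, so magic indices live in one place instead of being sprinkled through the code. A small illustration with a made-up fixed-width record layout:

```python
# Hypothetical fixed-width record layout, purely for illustration:
# 8-char date, 4-char ticker, 8-char price in cents.
DATE, TICKER, PRICE = slice(0, 8), slice(8, 12), slice(12, 20)

record = "20140311MOZJ00001234"

date = record[DATE]               # "20140311"
ticker = record[TICKER]           # "MOZJ"
price_cents = int(record[PRICE])  # 1234
```

The same slice objects work on every line of the file (and on lists or bytes too), and changing the layout means editing one place.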

I've been using Python and Ruby on and off for a couple of years (largely because I haven't found the need to use them seriously in my day job or side projects).

One thing that strikes me as odd is how people describe Python/Ruby as way more readable than Java.

I felt that Python, while more readable than Ruby (because Python uses fewer symbols), still contains more nifty tricks compared to Java.

It's true that the result is less code, but behind those fewer lines bugs might linger, because plenty of "intent" may be hidden deep in the implementation of Python.

The oft-touted Python principle of "explicit is better than implicit" seems to correlate better with the much-maligned "Java is too verbose" style.

Anyhow, the other day I was refreshing my Python and learned about the special methods I can override (__eq__, __ge__, __gt__, __le__, __lt__), and I wonder how overriding those results in fewer lines of code compared to Java, where you override equals and hashCode and implement one Comparator method that can return -1, 0, or 1 to cover the whole spectrum of comparisons (and even equality, given the context).
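For what it's worth, Python doesn't require writing all five comparison methods: functools.total_ordering derives the rest from __eq__ and __lt__, which is roughly the moral equivalent of Java's single compareTo. A sketch with a made-up class:

```python
from functools import total_ordering

@total_ordering
class Version:
    """Define only __eq__ and __lt__; total_ordering fills in <=, >, >=."""
    def __init__(self, major: int, minor: int):
        self.major, self.minor = major, minor

    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        # Tuple comparison gives lexicographic ordering for free.
        return (self.major, self.minor) < (other.major, other.minor)
```

So the Python/Java difference here is smaller than it first looks: both ultimately hinge on one ordering definition.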

Coming from a long history of languages like BASIC and Pascal, I will bookmark this tutorial. It seems to open up a lot of interesting Python features that were, quite frankly, not always easy to understand when described in plain text, but now seem pretty simple when presented as examples.

I'll also think about the "collection of simple examples" next time I want to document something.

A good reference, to be sure, but man, do I resent the term "trick" in programming. It implies a deception, or something clever that you wouldn't think to look for, like opening a wine bottle with a shoe. These aren't tricks, they're (largely) standard library features that you would simply expect to exist. But maybe I'm underestimating the NIH effect.

This is a cool idea, my wife and I did the cross country ride (Oakland to Washington DC) in sleeper cars and it was a lot of fun. Some great scenery and a lot of time to think. Weird things at the time were the plastic utensils in the dining car seemed a bit jarring, and of course train stations in the USA can be fairly tawdry compared to European stations.

There are lots of things that challenge rail in the US, perhaps the most obvious is private ownership of the rails themselves, as opposed to freeways which are state owned and maintained. That shifts a lot of costs on to fewer payers. It also means the rail owner's trains get priority (in this case freight) so scheduling is quite difficult to maintain. There is also a tremendous amount of bureaucracy and complexity built into the system which I've found resists even modestly determined prodding. As part of an exercise in home schooling we tried to find out what it actually cost to put in the San Jose light rail in order to compare that to what we had learned about the Northern Pacific Railroad at a wonderful museum in Sacramento. All of our efforts to get what I had assumed was just boring public data were met with suspicion and resistance. That was pretty weird.

This is really cool and a great promotional idea for Amtrak. There used to be a universally romantic notion about long-distance travel by trains that I'm sure Amtrak is trying to bring back to the forefront of Americans' minds. Check out this article by a freelance writer about her cross country trip by rail: http://www.washingtonpost.com/lifestyle/travel/riding-an-amt.... I'm sure that's the type of promotion that they're looking for.

The inaugural residency ("beta test") happened in February, when an NYC writer, Jessica Gross, was given a free 39-hour ride between NYC and Chicago (and back) in a sleeper cabin. She wrote about it in The Paris Review:

I once had to travel by Amtrak from LA to NYC via Chicago ($230). (I was forced to: I lost my passport and couldn't board a plane.)

Not only did I get to see a lot of places, but I truly accomplished more quality work than I normally do in a similar time frame. There were nearly zero distractions other than the occasional beautiful sight and the rest breaks I took; I had the chance to visit the surroundings of the major stations the train stopped at.

Perhaps it's my ability to sleep in the weirdest places, but I found the sightseer lounge car couches very comfortable. I went to bed 1hr or so after dusk, and I woke up with the morning lights. That's about 8h of sleep or so every night.

PS: Make sure to download a metric shitton of music before you do this. The sightseer car (the only place where one can truly work comfortably in the train) is usually a noisy place.

I would NOT advise applying, as it means essentially signing away the rights to the work you send them as a sample just by APPLYING:

"6. Grant of Rights: In submitting an Application, Applicant hereby grants Sponsor the absolute, worldwide, and irrevocable right to use, modify, publish, publicly display, distribute, and copy Applicants Application, in whole or in part, for any purpose, including, but not limited to, advertising and marketing, and to sublicense such rights to any third parties."

"Applicant grants Sponsor the absolute, worldwide, and irrevocable right to use, modify, publish, publicly display, distribute, and copy the name, image, and/or likeness of Applicant and the names of any such persons identified in the Application for any purpose, including, but not limited to, advertising and marketing. For the avoidance of doubt, ones Application will NOT be kept confidential"

"Upon Sponsor's request and without compensation, Applicant agrees to sign any additional documentation that Sponsor may require so as to effect, perfect or record the preceding grant of rights"

The brilliant thing is not the nature of the program, but the timing. Americans are driving less and less, for a lot of reasons, but in my opinion the desire to use mobile devices in idle time is paramount.

I for one am super excited for Amtrak and what this will mean for the travel industry. As a travel blogger that has been in the industry for a long time, I've worked with several brands that have no idea what to do with "new media". This is setting a fantastic example for other brands that will hopefully catch on.

This is a great idea! I traveled on Amtrak a few years ago from NYC to New Orleans (30 hours) with my wife and young daughter and had a great time - it's just a pleasant way to spend some time, talking, reading, thinking. During the day we set up a little play area for my daughter in our sleeper car and she loved watching everything go by. We were by far the youngest in that section and we often sat with retired folks at meal times (and had some great conversations).

Although it was impossible to get important work done over a VPN over Amtrak's Wi-Fi (I really need VPN access for my job), it was still a fun journey from Charlotte to Philadelphia on my way home from a new years party. I worked from home that day, and while I was mostly incommunicado I could still get a lot of code work done on my own machine that I'd been putting off.

I'd love to do this for coding, except the onboard wifi is unusable. I tend to prefer Amtrak to any airline in basically every area except wifi, which is bizarre considering how recent a development in-flight wifi is.

This promotion is of course targeted toward writers but how amazing would it be to have 2-5 focused days to hack while on the train? Perhaps mobile connectivity would be an issue in some locations but the upside of the focus-time would probably be greatly productive. A train hack-a-thon.

This is an excellent piece with a couple of important lessons on how to think effectively:

- The ability to think creatively

- The ability to substitute initially attractive moves with well-thought-out, long-term effective ones.

However on the other side of the coin is what we hackers face more often - Analysis Paralysis.

Once you fall into the Analytical Mindset, there is such a thing as being too analytical. Sometimes if it feels right you just go ahead and F*ing do it.

Otherwise the fear of making a wrong decision will paralyze you into inaction - which is worse than a screw-up (usually). So it is a balancing act - think enough but not too much. Analyze but not to the point of paralysis.

When my wife was learning to play the piano, her teacher used to say "if you're going to make a mistake, make it loud so we can hear it and fix it." I make my students do math in pen for the same reason -- instead of silently making the same mistake over and over again, it gets made once, analyzed (by the students), and fixed. This bothered the students at first, but they've come around and become much more thoughtful about what they write.

> "Teaching chess is really about teaching the habits that go along with thinking," Spiegel explained to me one morning when I visited her classroom. "Like how to understand your mistakes and how to be more aware of your thought processes."

> "I saw Spiegel trying to teach her students grit, curiosity, self-control, and optimism."

Which is really what teaching is about. I think most teachers know this, and we get a fairly healthy dose of it in professional development every week. I'm a math teacher, but the training I get during the school year isn't in math, it's in things like "accountable talk". It sounds like the teacher in this article is particularly gifted and practiced.

This isn't just for classroom teachers. The same concepts matter for parenting and in the workplace.

And I really believe that's why we seem to win girls' nationals sections pretty easily every year: most people won't tell teenage girls (especially the together, articulate ones) that they are lazy and the quality of their work is unacceptable. And sometimes kids need to hear that, or they have no reason to step up.

This could apply to boys as well as girls, and indeed to anyone at just about any age; sometimes we need to be told that we're not measuring up. I am reminded of Philip Greenspun's story about the venture capitalists who wrecked ArsDigita, the company he had built (from http://waxy.org/random/arsdigita/):

[F]or most of this year Chip, Peter, and Allen [the VC Board members and CEO] didn't want to listen to me. They even developed a theory for why they didn't have to listen to me: I'd hurt their feelings by criticizing their performance and capabilities; self-esteem was the most important thing in running a business; ergo, because I was injuring their self-esteem it was better if they just turned a deaf ear. I'm not sure how much time these three guys had ever spent with engineers. Chuck Vest, the president of MIT, in a private communication to some faculty, once described MIT as "a no-praise zone". My first week as an electrical engineering and computer science graduate student I asked a professor for help with a problem. He talked to me for a bit and then said "You're having trouble with this problem because you don't know anything and you're not working very hard."

The unparalleled Think Like a Grandmaster by Alexander Kotov explains not only planning and strategy in chess but also the methodical use of time.

Assess the position. Identify the variations to consider. Evaluate each variation for a roughly equivalent period of time. Choose the strongest. Sanity check you haven't missed something. Move.

Repeat, exhaustively, without losing focus, for a multiple of hours.

Edit: the parallel with startups is clear. In chess, you can only think so far ahead. This may be one or two moves, or for a strong player it may be five or six. Either way, you have a visibility horizon but you have to move.

For people interested in brutalizing their egos and learning how to think in some of the ways this article mentioned -- longer-term, more deliberately -- I cannot strongly enough recommend learning how to play Go (http://www.britgo.org/intro/intro2.html).

It's a less popular, but probably more suitable game than chess. The individual rules are far simpler than chess, but the game play is way more complex, with lots of edge cases.

It also has a built-in handicap system that makes it possible for players of different ranks to play fair games, and the game board size can be scaled down for beginners while they learn the basics.

I don't like it. I think it's very important to make people comfortable with the idea that they often make mistakes and their thinking is not up to par. You need to make them comfortable thinking about their thinking and being open about it. To do that you need to point out a lot of mistakes and encourage them to think about the process leading to them. That's difficult for many people (because of ego, mainly). However, the woman from the article doesn't achieve it in my view. Her way is to inflict guilt:

>> Spiegel's face tensed. "We did not bring you here so that you could spend two seconds on a move," she said with an edge in her voice.

>> "This is pathetic. If you continue to play like this, I'm going to withdraw you from the tournament."

>> "I'm very, very, very upset to be seeing such a careless and thoughtless game."

I call it bullying. Why not just focus on the thought process and try to detach emotions from it, that's what the kid needs to learn in the first place:

-"How much time did you spend here?"

-"Two seconds"

-"You see, spending two seconds here led to a blunder which you suffered from for rest of the game, we need to work on your thinking habits. There is not much time for that now and as I screw up as your teacher not teaching it to you before for now I only suggest that once you decide on a move, look away from the board, try to reset your mind, sit on your hands and look at the board as freshly as possible for 15 seconds to see if you are not blundering anything".

Then you add: "Thinking habits in chess are everything; a lot of brilliant players never make progress because of occasional slips, and a lot of not-so-brilliant ones enjoy success because they avoid simple mistakes thanks to good habits. We are going to work on this after the tournament; there are many ways. Rest assured it's the main problem chess players have; you are not alone. How well people improve in that area is going to be the difference between winning and losing, so it's an exciting area to focus on." Then you discuss ego, how not being willing to admit your own mistakes is a major roadblock, and how it's perfectly OK to discuss mistakes but not OK to be happy about them or comfortable with them! You need a healthy dose of ambition; you need to be disappointed... but optimistic, believing you can get better. Feeling guilty won't lead there. Feeling like you are disappointing other people won't lead there either (even if it works, it's dependence on an external motivator, disappointing someone, and at some point this someone won't be there). If you act like the woman from the article, people will avoid you; nobody wants to feel guilty, after all. They want to improve, work on their thinking, compete, and have fun.

Her way shows the characteristics of bad teachers and bad parents. I've encountered both, and I think it's the best way to kill natural joy and passion quickly. Even if you get some quick results, it won't be long-term and it won't reach maximum potential.

I think this falls into the same trap as the stories earlier about the LHC physics group that abandoned PowerPoint for a whiteboard: that because something is a good idea for a particular intellectual exercise, it's a good idea generally for thinking, learning, success! No: chess is a quite particular skill where you can't afford to make mistakes, and the problem is bounded and can be fully rationalised. Most creative or scientific endeavors are quite different, and some may best be learnt by experimentation, trial and error. I'm sure she has a great way to teach chess, but I don't think it's a panacea.

On principle, I noticed something that looks like selection bias: she seems to criticize only the decisions behind wrong moves, without comparing them to when he did well. After all, maybe he spent just one second on the good moves because his instinct is very good?

I know in practice he should have used the available time, but I wanted to underline the one flaw of the article; the rest is pretty good.

What I think makes her approach powerful is that she does BOTH of two very important things: she expects the kids to do more than they are doing, and she only asks them to take the step right in front of them.

I see a lot of teachers/parents/bosses doing one or the other. They demand more of a kid, but fail to properly assess where the kid is, and therefore ask a little bit too much, setting the kid up for failure. Or they acknowledge where the kid is but fail to really push them to take the next step, leading to complacency. Both ultimately lead to fear.

In practice doing it right requires immense knowledge of both the subject and the student, which is what makes it hard. But when done right, people respond by growing very fast. And the experience, while sometimes exhausting, feels humane and healthy.

My number one concern with this approach is that it creates an extreme dependence on an external locus of motivation. This seems like it would be great if you want to turn children into excellent cogs for your machine, as in the industrial age, but it could be horrible for creating pioneers and innovators.

I would welcome approaches like this when combined with something like the kind of educational freedom given at a montessori school. In this case, we're looking at a chess team. So maybe the children are participating voluntarily or maybe they aren't.

I'm sympathetic to the ideas in the article, but is there any, you know, actual /data/ to support that calling kids lazy and telling them their work is unacceptable is an effective way to teach? I talk to people who study this stuff and do consulting for people like the US military (who aren't particularly known for their touchy-feely approach to training), and, as far as I can tell, this doesn't work particularly well.

It's really difficult to get students to think hard about the feedback you give them. This article gives a great way to do that, and I think it's a large part of the success. Simply making them confront their own mistakes honestly.

I can't stress enough the main step to increase thinking skills and self is to introspect. Every challenge requires it to fully learn and grow from it. When solving problems for example, it is not only the solution that is important but also the very process to arrive to that solution. Aka: "Thinking" and "Meta-Thinking".

If I understand the argument there correctly, the responder is saying: Nobody should ever use this functionality, instead they should always check that the date is not None. So, we should leave this broken, because we don't want to break backwards-compatibility with that class of applications that nobody should ever write.

That philosophy, taken to its logical conclusion, results in everything being broken forever.

I've never seen a good argument for anything besides "false" being considered false. Likewise for "true". Keystrokes are not a commodity for most coders, and compilers are not dumb; just be explicit and write "!= 0" or whatever.

I just got bit by this a few days ago. I was creating an event scheduling system that uses either repeating entries with a datetime.time, or one time entries with a datetime.datetime. I had code that said "if start_time" to see which it was, and discovered later that midnight evaluates to false. It's not the best idea.
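For reference, the gotcha looks like the sketch below. (Note that midnight being falsy was later changed in Python 3.5; the explicit `is not None` check is safe on any version.)

```python
import datetime

start_time = datetime.time(0, 0)  # midnight

# Fragile: on pre-3.5 Pythons midnight is falsy, so this branch
# wrongly treats a real time value as "no time set".
if start_time:
    status = "time set"
else:
    status = "looks unset, but it isn't!"

# Robust either way: compare against None explicitly.
has_time = start_time is not None
print(has_time)  # True
```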

Ignoring Python for a bit and thinking as a designer of some hypothetical future language: there is a nice rule given here for evaluation in a Boolean context. I wonder whether it should be taken as a general guideline for future languages.

The rule, in its entirety, is this:

- Booleans are falsy when false.

- Numbers are falsy when zero.

- Containers are falsy when empty.

- None is always falsy.

- No other type of value is ever falsy.
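The rule as listed can be checked in a couple of lines:

```python
# Each value on the left is falsy, each on the right truthy,
# per the five-point rule above.
falsy = [False, 0, 0.0, "", [], {}, set(), None]
truthy = [True, 42, -1, "x", [0], {"k": None}, object()]

assert not any(bool(v) for v in falsy)
assert all(bool(v) for v in truthy)
print("rule holds")
```

Note that `[0]` is truthy: it is a non-empty container, regardless of what it contains.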

I can think of two ways we might possibly want to alter the rule.

The first is to expand the idea of number to include arbitrary groups (or monoids?), with the identity element being falsy. So, for example, a matrix with all entries zero might be falsy. Or a 3-D transformation might be falsy if it does not move anything.

The second is one I have encountered in C++. There, an I/O stream is falsy if it is in an error state. This makes error checking easy; there is one less member-function name to remember. We might expand this idea to include things like Python's urllib, or any object that wraps a connection or stream of some kind.
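Translated to Python, the C++ idiom would amount to defining `__bool__` (`__nonzero__` in Python 2) on the wrapper; the `Stream` class here is purely hypothetical, just to sketch the idea:

```python
class Stream:
    """Hypothetical connection wrapper sketching the C++ iostream
    idiom: the object itself becomes falsy in an error state."""

    def __init__(self):
        self.error = None

    def fail(self, message):
        self.error = message

    def __bool__(self):          # truth test, like `if (stream)` in C++
        return self.error is None

    __nonzero__ = __bool__       # Python 2 spelling of the same hook

s = Stream()
print(bool(s))   # True: no error yet
s.fail("connection reset")
print(bool(s))   # False: error state
```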

EDIT: OTOH, there is the Haskell philosophy, where the only thing that can be evaluated in a Boolean context is a Bool, so the only falsy thing is False.

EDIT 2: The comment by clarkevans (quoting a message from INADA Naoki) already partially addressed the above group idea: "I feel zero value of non abelian group should not mean False in bool context."

"goto fail" is a well-known error handling mechanism in open source software, widely reputed for its robusteness: http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslKeyExchange.c https://www.gitorious.org/gnutls/gnutls/source/6aa26f78150ccbdf0aec1878a41c17c41d358a3b:lib/x509/verify.c I believe Python needs to add support for this superior paradigm. It would involve a new keyword "fail" and some means of goto'ing to it. I suggest "raise to fail": if (some_error): raise to fail fail: <error handling code> Unless there are many objections, this fantastic idea might be submitted in a (short) PEP somewhere around the beginning of next month. There is some obvious overlap with the rejected "goto PEP" (PEP 3163) and the Python 2.3 goto module. However, the superiority of goto fail as error generation and error handling paradigm has since then been thoroughly proven.

I think the interesting part is what is revealed about Python and the difference with something like Ruby.

Python is stable[0] and places a high degree of importance on backwards compatibility.

This behaviour is well documented (and called out for particular note). This reinforces that it is (a) official and (b) not a bug because it is the documented behaviour.

On the other hand Ruby (and most Ruby libraries) seem both less concerned with backwards compatibility, have less thorough documentation[1] but are more willing to change and improve.

There isn't a right and a wrong between these approaches although for most things I think I would prefer something between the two. I think I generally prefer Python in terms of syntax (Ruby is a bit too flexible with too many ways to do things for my taste) but I do wonder if Python will be left a little behind.

[0] Python 2/3 transition is a single big deliberate change.

[1] I have an open Rails issue that I don't know if is a bug or not because there isn't documentation that is sufficient to compare the behaviour with so it is a case of what feels right/wrong: https://github.com/rails/rails/issues/6659

Maybe I'm getting off into the philosophical decisions of the reptile wranglers, but this particular debate sounds a lot like someone made a decision long ago that had further-reaching ramifications than expected, and now the justification is ingrained, things are built on it, and no one's willing to make the "correction."

I came across a similar issue when using Rails the other day, where I gave my model a boolean field with a presence validation. The presence validation of a boolean field fails if the bool is set to false. It had me confused for a while, but it wasn't a big enough issue for me to research/report.

In every other language I've used, a time value of 0 is used when a datetime only contains a date and doesn't have a specific time. The existing behavior would make sense in that context. I know Python also has a separate date object, are the two interchangeable enough that you could mix and match without problems?

This offers a counterexample to the simplistic notion that "duck typing" results in programs that automagically do the right thing. The reality is that duck typing does not relieve you of the responsibility of understanding the semantics of the elements you use to construct a program.

Creeping semi-booleans make me very uncomfortable. But what's the alternative? A-values and I-values? A "" for questions unanswerable in the type system? Just punt and let Javascriptisms take over the world?

Here is a perfect example: John Carmack does a great job of rocking the white-board in this wonderful presentation. He starts out with a tablet, and uses that to track his discussion points, then hits a deep-dive on the white-board at approximately 00:18:45.

I find this style absolutely engaging. Presentation software like PowerPoint has its place, but can make it all-too-easy to move through material too quickly. On the other hand, actually drawing and writing things out while discussing the topic slows things down a bit, allowing the audience to engage and understand the topic at a more learning-friendly pace. I personally find this "show me don't tell me" style of white-board presentation refreshing and conducive to my understanding of the topic.

If non-technical speakers spent less time faffing around before the session making awful looking powerpoints, and more time learning how to speak engagingly, the world would be a much better place.

This said as an Audio/Visual Operator who has spent hundreds of hours at a sound-desk watching technically inept speakers fail to impress - no matter how flashy the animations.

The worst thing over the last few years is "Prezi". It's a PowerPoint alternative which ostensibly makes it easier to make awesome-looking graphics.

The two problems with it are that it's a hell of a lot harder to actually present on a second screen, so you end up having to drag windows around; and that speakers are still under the impression that because you have swooshes and zooms and text folding inside other text, people are suddenly more likely to find the presentation content interesting.

The trouble with BAD technology is: how do you fight it? The normal way is by competition - making better tech. But when the concept itself is wrong, yet somehow culturally accepted...? Any ideas?

I can see why scientists like whiteboards. In the old days, if you watch clips from the '30s and '40s, you would see scientists talking to their peers with chalk and a blackboard. They could start by saying, "Okay, so we know this gas law from the 1800s, and then we saw this new behavior and started investigating blah blah, and then we came up with this new equation, and here is the proof blah blah." That was the old days. The blackboard worked fine.

But was it fine? If you are delivering to five people, probably. What about 10, 20, 30, 100, 300?

These are the things to consider when giving a presentation:

1. your target audience

2. time constraint

3. technology and tools available

4. scope of your presentation (is this a lecture, a short 15-minute progress report, or a workshop)

"Without slides, the participants go further off-script, with more interaction and curiosity," says Andrew Askew, an assistant professor of physics at Florida State University and a co-organizer of the forum. "We wanted to draw out the importance of the audience."

You see, if you are giving a two-hour workshop to a small group of scientists in which everyone knows each other, the discussion can become interesting. But if you are giving a 30-minute workshop or a 30-minute talk to a larger group of people, the free-style whiteboard presentation breaks down.

The main problem is that only a handful of people will fully comprehend what the speaker is up to regardless of the method. Some people are slower at picking up new ideas; it could be experience, a language barrier (and sometimes it's the speaker's accent), or misunderstanding. People fear asking dumb questions in front of a large group of experts, so in the end it's just an interaction between the speaker and a handful of experts. The rest will just nod and follow along.

Neither PowerPoint nor whiteboard can solve the main problem entirely. But with PowerPoint, one can traverse back and forth, and the audience does not have to suffer illegible handwriting (and in a large group people could be sitting in the far back). This is something a whiteboard-only discussion can't offer.

So if they run a small group discussion, a chalkboard is fine. But if they run a large group discussion, I'd argue they should start with slides and supplement with a whiteboard. Slides should be there to deliver textual information and graphical information that is hard to explain or follow on a whiteboard.

PowerPoint isn't the enemy. Poor use of PowerPoint is the problem. Bad presenters are the problem. People switching over to whiteboards won't make them better presenters; now they'll be communicating poorly in a messy, unshareable medium.

The solution isn't no PowerPoint. The solution is teach people how to communicate. How to present to both technical and nontechnical audience. How to write an executive summary / elevator pitch.

I don't understand this. Most of my teachers use blackboards and it's really annoying to follow a presentation like that: you have to wait for the person to write, you have no slides later on to support your notes, and since there are no slides online you have to write down everything they write, so you can't even listen properly to the talk.

And some things are just clearer on slides... I don't really see a lot of benefits in whiteboard-only lectures. A combination of whiteboard and slides is best.

I can still think of some great people who don't use slides, but it's rare, and only a few people do it well (Gilbert Strang comes to mind[1]).

As a particle physicist, I wholeheartedly welcome this. Our meetings, of which we tend to have 4-5 a week, are usually PowerPoint orgies. Because of the intensely dense slides, it's often hard to follow, and people don't listen to the speaker but read the slides. Even worse, they think "I'll read the slides later" and work on their laptops in meetings. It's not rare to see 2/3 of a meeting working like sheep on their laptops (especially in larger meetings and talks), while only a small fraction is actually doing something talk-related like viewing the slides, or doing actually urgent work. As a consequence, we have banned the use of laptops during talks in our group. What is completely normal everywhere else was a small sensation in our group, but I think everybody agreed that it is better now.

We can't realistically ban PowerPoint, since as experimentalists we have to discuss lots of graphics and plots. What we did try once was to use our lab books instead. Every (PhD, Masters) student would write a summary of their week's progress in their lab books, including printed-out plots, and we would project it with one of these old-fashioned book projectors. It was nice because you could also go back and look at the details in the lab book, and it gave you an incentive to keep your books correctly. Unfortunately, it became impractical as our group grew, and also because we have a lot of collaborators from other groups who are connected via video.

3. Even if you write everything down, it would still be less information than what someone could include in a PowerPoint

4. Powerpoint is much more legible

5. It is easier to go at your own pace during and after the presentation if someone is using a PowerPoint. If someone is using a white/blackboard, they are going to erase the last part very quickly after they finish writing it.

I've heard that writing equations on a whiteboard paces the talk and gives the audience time to digest. With a slideshow, most presenters will go at a pace comfortable for them, but that typically ends up being too fast for the audience.

Now that's a great experiment! I think the use of Powerpoint is useful and mandated under certain circumstances (e.g. if you want to show experimental data), but when discussing a concept with your peers, working on a whiteboard is better for various reasons:

1. It forces you to think more about what you want to say and how you're going to write it down beforehand.

2. It sets a uniform pace for your presentation (writing stuff down is harder than advancing slides)

3. It lets your audience follow the train of thought that led you to the results you're presenting, and allows your content to unfold before their eyes.

4. It invites participation and allows for easy modification and adaptation of your content during your presentation (try that with Powerpoint).

That said, structuring a good whiteboard talk/presentation is hard work too and I've seen many people (including professors) fail at it.

Reminds me of the Anti-PowerPoint Party[1] (which was linked to on HN at some point). I would also like to say that I personally find whiteboard presentations much easier to follow. I've taught a little bit too, but used slides, because it was easier. Maybe banning computer slides isn't such a bad idea...

Having the speaker write out things on a board also has the advantage of giving the listeners time to think through what has gone before. In my experience this leads to more interesting discussion.

I'm a teacher of economics and the only time I use slides is when I have to present a lot of data or literal text like the statements of theorems. Even in these situations I think distributing printed handouts works much better. But that involves logistics and expense.

I'd say it makes sense for equation-heavy fields. The biophysics stuff I did during my PhD, however, worked very well with Powerpoint. I'd always have the images-and-diagrams-only presentation without text as my goal, which I usually managed to almost-achieve.

Based on that, my mom set up a service to build that kind of presentation at http://www.emilypresenta.com/ (the site is in Spanish for now), including finding and buying the photos/icons and providing a basic layout for the talking part.

Good for them, but the truth is that every analysis group at CERN uses Beamer, Keynote or even Powerpoint for the almost-daily meetings, via PDFs submitted to Indico (coupled with Vidyo). There's no reasonable alternative. A completely different scenario is lectures or theoretical talks; there it never made much sense and is a waste of time.

Well, Word has long since ceased to be relevant (in the code-literate world); Markdown, wiki markup or similar has taken its place (and LaTeX always came close to ending it). Now PowerPoint will join it as S5 and the like take over.

It's not quite so big as the headline, but it's still good. Here's the scoop:

The FAA has long had rules for model aircraft, which would include many small "drones", and under which you can personally operate them now. You're supposed to stay low and away from stuff you could damage.

They also (as of fairly recently) have some rules for UAS (unmanned aerial systems) that are more like the rules for real aircraft, and are working on integrating them into real airspace. Thus a UAS requires a certificate and permission to fly--but they're still working on how to do that, so you can't get one yet. If you were to buy your own Predator you couldn't fly it, 'cause it might run into an airliner. Fair enough.

However, they also declared that commercial use of an otherwise-model aircraft turned it into a full UAS, which you currently cannot fly. So you could use a quadrotor to take aerial photos--but you could not get paid for it.

This ruling, in a nice display of common sense, disposes of this last bit, making operation of "model aircraft" the same regardless of intent. You still have to fly safely, and in limited space, but now you can get paid for quadrotor photography, as the FAA no longer has a basis to fine you. You still cannot fly that Predator, though. Sorry.

But I'd be careful before making too much investment based on this decision. At the very least, check into how the appeals process on a ruling like this works; dunno how final it is. You'll also want to check carefully to make sure your intended use can be performed safely by model aircraft; fully unmanned systems probably aren't going to pass muster (unless perhaps your automated avoidance system is really good).

And here is the complaint the FAA originally filed, which includes flying through crowded streets, flying through a tunnel with traffic, flying as high as 1500ft AGL, flying at an individual, causing them to jump aside [apparently moments before crashing into a hedge], and flying within 100ft of the UVA Medical Center's heliport [though the video was being filmed for UVA].

Okay, time to build drones full of atmosphere sensors! The weather forecast is about to get a lot better. This is excellent news for crowdsourcing remote data about the atmosphere. I've been pulling pressure measurements from smartphones for a while, but a core problem is that a lot of weather develops over areas of low smartphone density.

Sending out drone fleets will be a most excellent solution; they're reusable, so you don't have to make 200M of them. They're connected and already carry many required sensors. They're coming down in price, and we're at the start of a thriving commercial ecosystem. Can't wait to start building!

So as a pilot of a manned aircraft what does this mean for the safety of my flights? Do I need to start trying to avoid unlicensed, unlit, and unannounced drone aircraft whenever I'm below 400ft AGL?

Edit: Thanks to everyone below for your thoughtful comments. My replies are as follows: 1) Regarding model aircraft - I would argue that the low density of these operations at the moment is what has prevented an incident between a manned aircraft and a model aircraft. Also, the nature of radio control has necessitated that the model aircraft generally be within sight range of the operator, and as such the operator is still able to avoid other aircraft to some extent. This may not be the case for automated drones.

2) Regarding 400 ft AGL. How do I know that the drone operator won't accidentally end up at 600ft+ AGL (for example), which happens to be at the low end of a standard traffic pattern altitude (800ft)? As far as I know there's no way to know what altitude your drone is above the ground except possibly GPS which does not always give you an accurate MSL (or AGL) altitude. Will all drone operators in populated areas be made aware of local air traffic patterns? What if I want to exercise my privilege to operate below 500 ft. AGL in unpopulated areas? I can think of a lot of cases where I've been below 400ft AGL during takeoff and landing while over densely populated areas.

My perception is that automated drones with no mechanism to avoid manned aircraft at or near traffic pattern altitude at densely populated locations are a big problem.

This reads more like the recent Net Neutrality decision than anything else.

It's less "Drones are free to operate commercially" and more "If the FAA wants to regulate (commercial) drones separately for any other model aircraft, they need to create explicit regulations that apply to them"

What seems explicitly ruled against is selectively interpreting existing regs as applying to an imprecise (and shifting) definition of "drone" and trying to use a "Policy Statement" as de facto law (by claiming a request for voluntary compliance, but then suing for non-compliance), without following the appropriate procedure or meeting all of the requirements of new regulation.

From a different article[0], it sounds like the guy being sued in this case may have been operating the drone irresponsibly, which opens a window for legislators to put laws in place that will give the FAA power to regulate drones:

> Pirker operated the aircraft within about 50 feet of numerous individuals, about 20 feet of a crowded street, and within approximately 100 feet of an active heliport at UVA, the FAA alleged. One person had to take "evasive measures" to avoid being struck by the aircraft, the agency said.

Without a law that gets passed otherwise, this also opens up the floodgates to full law enforcement use of drones in public/private airspace. If the sky's open for commercial use, it's automatically available for law enforcement as well.

I had hopes that there would be some rules/regulations in place first...

edit: assuming the decision allowed use in populated areas, full disclosure that I haven't had a chance to actually read it yet

Here at my little company Fighting Walrus [1] (we make a radio accessory so small commercial UAVs can be controlled via iPad) we are really excited that there has been some forward movement on the legal front. My personal view has always been that commercial drone use would be worked out in the courts before the FAA really got a handle on their (much delayed) roadmap for integrating them into the national airspace. However I would caution that the FAA is probably going to appeal the decision to the full five member safety board. The FAA is not going to give up regulatory control of this class of small unmanned aerial vehicles (SUAVs) easily.

Not really. It just means the FAA has to come up with rules just like for any other flying planes. No one wants drones flying into protected airspace, crashing into things, etc. It's not a sudden free for all. They just can't refuse to come up with rules because it's a drone. The rules could still be hard to meet.

This sounds similar to the UK's CAP722, which is the basis for commercial UAS flying over here. As jccooper said, the immediate industries affected by this will be those which gather imagery or video.

In the UK, there is a restriction against flying out of line of sight. Most operators will fly using a GPS lock for stability but will not be using video streams from the aircraft for anything other than framing shots. Here, commercial UAS are often used as a low-cost, faster alternative to scaffolding; I recently got up at 4am to help an operator survey the exterior of an old hotel in the centre of a large city, looking for damaged pipes. He did the entire hotel in the course of three 3-hour sessions.

I think the FAA may lose the appeal but we will have to wait and see. This is really political for the FAA; they want to be in control of this area as much as possible, so they will fight very hard to get the lower ruling overturned. There is a large, heated discussion thread on DIYDrones.com [1] if anyone is interested in the opinions of folks with boots on the ground, so to speak.

After reading the pdf, I'm still not clear: does this mean UAVs can operate in regular airspace (above 400 ft) with proper licensing, or are they still (for now) limited to under 400 ft, and away from populated areas?

The government has some pretty big restrictions on binding regulations from agencies. Restrictions that make a lot of sense when you realize that most law is made by elected officials with constituents, i.e. representative democracy. Regulations aren't made like that, so you have a lot of rules to prevent abuse.

One of the restrictions is that you need to have defined periods for public comment on new regulations. Which the FAA did not do, as I understand things. It just came up with new rules without following the defined regulatory process. If for no other reason than that, the UAV rules were invalid.

Let's stick with some facts (I'm a pilot and work closely with the FAA for my dayjob so I know a bit about this space) ...

- The FAA is responsible for the safety of U.S. airspace from the ground up. The common misperception that its authority starts higher may originate with the idea that manned aircraft generally must stay at least 500 feet above the ground.

- There are no shades of gray in FAA regulations. Anyone who wants to fly an aircraft, manned or unmanned, in U.S. airspace needs some level of FAA approval. Private sector (civil) users can obtain an experimental airworthiness certificate to conduct research and development, training and flight demonstrations. Commercial UAS operations are limited and require the operator to have certified aircraft and pilots, as well as operating approval. ... The FAA reviews and approves UAS operations over densely-populated areas on a case-by-case basis.

- In the 2012 FAA reauthorization legislation, Congress told the FAA to come up with a plan for 'safe integration' of UAS by September 30, 2015. Safe integration will be incremental.

I have yet to see a practical solution that provides separation between UAV and manned aircraft. All flights under VFR are responsible for their own separation by way of see-and-be-seen. I have yet to see a UAV that can visually recognize an approaching aircraft and take evasive action.

This is simply awesome news for startup/small business UAS operators in the US in the short term. I do have some concern, though, that this could cause the FAA to now rush the rule-making process, which could result in half-baked, ham-fisted regulations. This is especially possible if we see several high-profile accidents during this new free-for-all period.

Cheap plug: if you want to play with aerial photography but are more of a software guy than a hardware guy, check out my embarrassingly buggy side project at http://airboss.io. It's an app that lets you use an old Android phone mounted on a drone as a photography/video platform with real-time first person view streaming using WebRTC.

What if I got 100 drones that picked up a tarp and carried me around (i.e. use them for transport, claiming that they carry the tarp and I just happen to be on it)? Or how about a non-profit using drones en masse to intimidate at political gatherings?

I like drones, but I have a feeling this is a decision that will get overturned within 5 years.

I am not a pilot. But I imagine that there should be no problems flying a remotely piloted unmanned aircraft in uncontrolled airspace--which is, I believe, usually under 1200' AGL. If you're operating remotely without a camera, you keep the craft within line of sight, and double the visibility distance for a manned aircraft. Otherwise, you operate by IFR using whatever telemetry you get back.

With a drone, however, I'd think that operating in controlled airspace would require extensive collision avoidance and fault recovery software, which would have to be tested and certified by experienced pilots.

There is a difference between drones and remotely piloted aircraft, and I certainly hope that the journalists can learn it before we end up with another "hacker" situation.

Even if you were to scope it just to software/SaaS product companies, there are minimally hundreds of these in the world, and dozens of them have HN accounts. Most don't post on threads like this, so I feel the need to pipe up and say "This is quite doable, and done, much more than you might expect."

I run a business selling penetration testing software that I develop. It's completely bootstrapped. I do very little services work (I actively send this type of stuff to friends' companies). Right now, it's just me, although that's probably going to change. By most of my own definitions and the one you posted here... it's successful.

How did I get started on this? Sort of by accident.

I was working for Automattic after an acqui-hire thing. After a year there, I found that I missed working in security. I found a full-scope penetration testing gig three blocks from my apartment.

In my spare time, I started to tinker with a few ideas and released them as an open source project. Said project saw a lot of interest within the hacker community very quickly. I didn't expect this. Folks formed an opinion on it pretty quickly. Some people hate it. Others love it. Of those who know it, very few are in-between.

I left my pen testing job with a decent amount of money saved up. I didn't know exactly what I would go and do afterwards. I spent some time tinkering with Android, just for giggles.

I was very reluctant to start a business that used my "successful?" open source project. Partially because it leverages another open source project owned by another company.

I was at a conference in 2011 and someone from a US government agency asked if I was selling anything. I said no. He said that was too bad, because he had end of year money, and he liked my open source stuff. It was then that I decided to look at expanding my open source kit into a commercial product.

April will mark the two year anniversary of my first customer. My customers are well known organizations and they trust my software to assess how well they protect their networks. I'm constantly in awe of this.

I'm doing http://justaddcontent.com solo and self-funded. It turned into a bigger project than I anticipated, especially for my first product.

I started working on it full-time in October. It started as a hobby project about two years ago. It took particularly long because I have a non-technical, military background and had to teach myself coding, design, copywriting, marketing, etc. It's been a fun challenge.

I still work on it 12-14hrs a day on average, but I still love it and I love the problem I'm solving. The last few months I started focusing on product again and my customers absolutely love it, which is awesome. Now I'm turning my attention back to marketing.

Like the other guys, I'm not making millions yet, but I'm 100% self-funded and in no danger of running out of money. I continue to put 100% of what I make back in the business after my essentials.

I'm not sure when I'll start hiring, but I have some pretty major plans that I'll need help executing. It's just one of those things where it'll completely change the game, but it'll also change the dynamic of the business.

Five figures a month, just me, I've written about my solo business a couple times in other Ask HN threads. Ten years ago (almost to the day), in my college dorm, I was looking at the Webalizer web stats report my web host provided for my blog, and thought "I could do something much cooler than this". So I did. I had built a few educational sites and threw some ads on them for a couple years before that, but W3Counter was the first service I actually charged a subscription for, and now I make a living building and selling this stuff.

I run a small one-man-show publishing house: http://minireference.com. I produce math/physics textbooks for adults. I'm the author, business person, marketing person, and strategic partnerships person. Revenues are not stellar, but they keep me off the streets...

The value I provide is synthesis of a lot of educational material that exists out there into a coherent package (a book). In many ways, my work is similar to what linux distro package managers do: ensuring prerequisites are covered before the main package is installed.

I remember hearing one of the early Internet/www inventors saying the Internet will allow people to "live from the fruits of their intellectual labour." Does anyone know who this was? With eBooks and print-on-demand this is finally possible now. I would encourage everyone with deep domain knowledge about a subject to start writing about it and publish a small book. I think "information distillation" is of great value for readers. Feel free to email me if you need help/advice with the publishing stuff.

It just crossed 500 paying members. I started it with my wife, but recently the shipping part is no longer done by us two manually, but by a local supermarket. In the beginning it was just an email to some previous customers asking if they might be interested in a club like this. Then a landing page and a HN post. From there it grew through blog mentions and now there is a trickle of organic traffic coming in.

Before this I had some small apps on social networks that made more money, but were much more unstable. While Candy Japan could wither away, I expect the death would be more gradual. I still have some of the older sites / apps which together are still making around $500 / month, which is a nice bonus.

Probably anyone working as a salaried programmer in the US is making more money than I am currently, but I enjoy the freedom and the thought that there really is no upper limit. If we ever do hit 1000 members I'm planning to have a celebration :-)

I'm an avid cross-country skier, and traditionally daily trail reports are done by hand by the maintenance staff after they're out all night working on the trails.

I had the bright idea of putting GPS tracking devices in grooming equipment and creating the "what's been groomed" report automatically, in real-time.

It took about 4 seasons to really get it right, and there was no appreciable income for that period. Lots of lessons learned about equipment (antennas, good wiring practices in vehicles, power cleanliness in big equipment, etc), good ways to present the data, map projections, how to deal with messy data, dealing with non-technical users, cross-border shipping tariffs, mobile-network provisioning rules, the list goes on. I did it alongside my full-time job for the first 4 years.

It's a tiny niche, and one I never expect to get all that big, but it looks like I'll be able to make it my sole income source next season.

1 man startup - http://reviewsignal.com/webhosting/compare I do web hosting reviews. Not the scummy pay-for-placement stuff you see, but an actual review site. It tracks what people are saying about hosting companies on Twitter and publishes the results.

The story is told a bit here http://techcrunch.com/2012/09/25/web-hosting-reviews-are-a-c... I was just tired, after 10 years, of still relying exclusively on my experience and the experiences of people I knew. Figured there must be a better way; I had been working with Twitter data for my thesis and saw this opportunity.

Solo, self funded and profitable. I work on it while traveling around Asia.

Agree with patio11 there's probably way more than would speak up here. I seldom contribute to HN or the bootstrapping forums mentioned in another reply. I browse a little, but 99% of my time spent in front of the computer is spent working on product or replying to customer emails.

How I got started:

I've built SaaS apps before but they were the dreaded "solution looking for a problem" type.

Then I decided to do things strictly the Lean way. Got out of the building. Talked to customers about an idea I had. Pretty soon I discovered an adjacent problem that everyone had, that sounded fun to solve, and that I had specific domain knowledge in. I built and launched my MVP in one month, from a beach in Koh Samui. I've been traveling ever since then, spending each month in a different country.

Charged from day 1. Had paying customers from day 1.

I find changing my environment enables me to compartmentalize my work better - like I try to get major new features rolled out before I head to my next destination.

Not planning on doing this solo forever. Not ruling out hiring some help down the line and maybe a permanent office somewhere.

* Textbooks Please: a textbook search engine for college students. It's grossed ~$20k, almost all of which has been reinvested, and I'm not paying myself that much.
* dbinbox: an inbox for your Dropbox for receiving files too large to email. It's got ~25k users, but has made less than $1k in donations. I need to give this one a reboot soon.
* Email Tip Bot: send bitcoin with email. Launched two weeks ago and I've already got my first 200 users :D

I really enjoy the process of making these kinds of things, but I find it enormously exhausting to do the other half of marketing, SEO, publicity, etc. I'm working on getting better at SEO, but would love to find someone that likes the marketing side.

I don't know exactly how you define "successful online business," but I am currently a university student making $500 - $2000 a month at about 5 to 10 hours a week.

Basically, there is a market for vintage computer hardware, so I post some ads offering to take away old office items that businesses can't just throw away, such as old keyboards, terminals, etc., and they pay me a nominal fee ($1 - $5 per item, depending) to rid them of their "trash". I then clean those items up a bit and resell them at extremely high profit margins: $35 - $120 for 20 minutes of work (since I was paid to take away the trash).

Another way I make money is by tutoring or helping out with programming. I used to help out local people, but I have since switched over to Google Helpouts. Usually, it's just explaining some algorithms and writing some C code. Pretty easy, no real upkeep, and I can set whatever hours I want.

You can have a successful online business with one person, but there will always be a maximum to the amount of money you can make, and it does not scale well.

I ran a 1-person company for the past 3 years (B2B SAAS). I now have 2 other partners in the company to pick up the slack and we will be hiring a few employees next month.

It's difficult to maintain your current business (i.e. new features, bug fixes, customer service) while at the same time trying to get new business (marketing, new ideas, planning) and also have any kind of life outside the business.

You also won't be able to go on any kind of real vacation and time-off is challenging. I didn't think about these things at 20, but at 30, it's starting to become more and more important.

Just launched Pinegrow Web Designer (http://pinegrow.com) two months ago. The company is actually run by my wife and me, but I do all the work with Pinegrow while she is taking care of our other projects.

Pinegrow has been paying most of our bills since launch and I have a lot of expansions in the pipeline: full support for Foundation alongside Bootstrap, developer edition that'll work with templates, a similar app for designing emails...

I'm not sure he is particularly active on HN, but Rob Walling[1] is a solo entrepreneur managing at least a couple of SaaS products: Hittail[2] (which he bought and then grew) and Drip[3]. He also hosts a podcast on SaaS[4] and organises a conference for self-funded startups[5]. In the past patio11 spoke there too.

Hello from Quebec. I am on Hacker News as a big reader, not a commenter. My online business is profitable; it makes all my income, an OK salary for me :-). I have read the book 'The 4-Hour Workweek' and work only a few hours a week. The business started with a shareware game (1990). I quit my day job (2002) to create more shareware, and failed at the first one (the password/unlock was hacked the first week). So I came up with the idea of a client/server game (2004) (harder to hack). That worked well enough to make a small salary. Then I built another client/server game (2011), almost the same as the first one but localized in 3 languages. Then I received a lawyer's letter (2011) demanding I close both of my online sites. I made some modifications, and after 2 years they left me alone... Afraid of being shut down, I looked for a plan B (2012): I worked hard on websites that have a lot of visitors to make money with AdWords, and it worked. Now half of the revenue comes from the 2 online games, and the other half comes from AdWords. The shareware, online games and websites are all related to a very popular crossword game.

Since you didn't specify the type of business: I run a consulting business as a programmer (http://www.dot-com-it.com) that is just me. In the 'early' days (I started in 1999) I sort of fell into it. I was burnt out and walked out of a job.

Networking got me 2 consulting clients, and things just ballooned from there. I fell into it accidentally and learned a lot of hard lessons along the way.

In the early days I did a lot of fixed-fee projects for small businesses. On some projects I made tons of income; on other projects I put in a lot of unpaid time (due to me incorrectly bidding the project and/or improperly handling change requests).

I tried my hand at podcasting with a sponsorship model ( http://www.theflexshow.com ). It made ~$30K throughout its run, and I gave away a bunch of sponsorships in exchange for other services. Even though I had a huge audience for the size of the market, no one was trying to sell anything to that market. Not bad; but not enough to pay the bills.

I took the profits from the consulting business and pumped them into a product business selling advanced components to Flex developers ( http://www.flextras.com ). I did this full time, stopping all consulting. The business was, in essence, a failure. It generated about $10K per year, which is a nice side income, but not a "pay your bills" income. It was slowly growing, until some Adobe PR mishaps killed executive confidence in the Flash Player as an application development platform, which killed our sales. I shut it down and open sourced all the code.

Now I'm back to consulting, however the bulk of my clients right now are hourly as opposed to fixed fee. This is very profitable because many clients just keep renewing contracts and giving me work. However, it is the least satisfying because there is no defined end point, and it feels like I'm just spinning my wheels to kill time. Sometimes it feels like clients are creating just enough work to keep me busy so that I'll be there when they really need me.

Despite having multiple ongoing clients, it doesn't feel like I'm a business owner because they are paying for my time, explicitly. That isn't scalable in any way.

I'm prepping to launch a book under Nathan Barry's "Authority" model which will teach Flex developers how to program in AngularJS. More info at ( http://www.lifeafterflex.com ). People seem excited about this beyond anything I ever expected. I asked my newsletter if anyone was interested in reviewing a pre-release copy and I got 20+ responses, which is a significantly higher response rate than usual. If the early interest is any indication, more people will read my book than ever bought a Flextras component.

I run a small business called Cram Fighter (http://cramfighter.com) that is targeted at students (mostly medical) who are preparing for standardized exams. I got the idea after watching my wife prepare for her board exams, and it seemed like a perfect little project to learn iOS programming. Initially my goal was to earn maybe $5k annually, but now I'm on track to surpass my salary as a senior developer by next year.

You'll find a lot of one-person businesses targeting tiny, but profitable, niches like mine. What's great about it is that often when you find a tiny opportunity, it opens up a lot of other problems that need solving that you would never find otherwise. It's also a great way to learn the skills of running a business in a relatively stress-free way (at least compared to running a startup).

The only downside is if you're anything like me, you'll get antsy working on small projects and yearn to tackle bigger, more ambitious problems. Sometimes 1-person companies have the potential for turning into a company with startup-like growth, sometimes not. I'm still trying to figure out how far I can take my company.

You should check out the SideProject Book[1]. It's specifically about bootstrapped, successful, single-owner projects. It features some of the projects that appeared here, actually, like BCC.

As for myself, I am currently trying to educate myself into dividing my time better between my "day job" and my product. Hard to do, though, when your day job absolutely rocks... It's very easy to work all day long without realizing you should have stopped in the middle of the afternoon. Not a bad problem to have, mind you.

I am running inBoundio (http://www.inboundio.com; I call it the Basecamp of marketing) and I'm the only guy, so I do all the work.

I am really not that worried about slow growth or not making much money. I am enjoying what I am doing; I work on average 6 hours a day and can spend the rest of my time on learning new things and thinking about life and philosophy.

I started because I felt that the market (and I) needed such simplified software. Most of the options were too complicated and very costly.

I will soon be reaching 100 paying customers, so I will write a detailed post and share it on Hacker News.

I've worked on goffconcepts.com full time since 2003, entirely alone (the "we" is my wife). My new product FileSearchEX is highly pirated, so I'll probably be moving on to other things. I only recommend SaaS; forget about fat client software. The search engines enable a very toxic landscape otherwise.

It's a traditional desktop app (Windows, Mac), but only sold online via our own website or the Mac App Store. I created it about 4 years ago, and work on it solely in my spare time. In fact I'm employed full time at a major tech company, but this I keep separate.

To claim it's profitable is a bit misleading, because of course the major cost in developing such software is my own time. I've incorporated as a limited company here in Finland but do not pay myself a salary, so the only costs to the business are web hosting and occasional hardware purchases (computers, cameras).

I started this as a project for personal interest; at the time I was working as a software engineer developing financial trading software. Smart Shooter was a good way to develop something that covered both my interests in graphics programming and digital photography, to alleviate the boredom from my day job.

So for me it's been successful: it's still a pleasurable hobby, allows me an excuse to play around with the latest cameras, and brings in some pocket money. It doesn't generate enough revenue that I could quit my main job, but the possibilities could be there if circumstances change.

A website generator and a login/fetch user info/filter service attached for brazilian firms/hotels/inns that offer rooms to rent for long periods/temporary housing.

A very niche market in which I find myself: I was developing something just for me and got the idea of offering it to others. I have just one client (so I don't qualify as "successful"), but I'm following patio11's advice of offering services to niche, underserved markets. Does anyone have any advice?

My biggest problem is how to market to such a niche. I'm trying to email people, but the people in this niche are really non-computer users, so it is difficult.

I like reading about stories like this (from all the successful solo founders).

It proves that you don't need to get multi-million-dollar valuations to be successful and that the general entrepreneur is pretty content with the amount he/she is making (+1 to thousandaire).

These are stories entrepreneurs (who are realists) should read about and we'd probably all be better off avoiding those "billion dollar acquisitions" (for fear that it will consume us mentally, physically and emotionally).

I live entirely in the consulting space working on SEO/SEM and content marketing, and have done so alone since 1997. After a few pivots in web design, hosting and domaining, I've ended up in a place where I can be picky with clients and charge good fees. I hit the website development market on its first big wave, and moved into search before 99% of other SEOs even knew the discipline existed (and before some were even out of elementary school!)

I make much more than a full-time employee would, but there are a lot of bosses, and stress comes in waves. I have learned how to fire clients (hard to do) and how to size up opportunities. But the company is me. It's not saleable - no intellectual asset exists beyond what I bring to the table. So it's never going to have an "exit." This is my main gripe.

Also, I relocated to Kentucky after writing tons of code in The Valley and ended up in Lexington, KY - a great university town with a highly educated population (and a Google eCity.) This has offered me a nice lifestyle, plenty of time to raise my kids and material rewards for probably 60% of Santa Clara's cost. My company is at http://www.buzzmaven.com. Good luck!

I run a small online game called RPG MO (http://rpg.mo.ee). It gets about 20k unique visits every month. Money-wise it generates enough payments to pay for the server and associated services, and leaves a little for advertising as well. It doesn't cover all of my bills yet, so I have to maintain a full-time job while studying at university. I still have high hopes for this project though.

I currently run a member management site at www.ledenboek.be/EN for sports clubs, which I'm still improving. But it's also a test for marketing and gaining clients.

Next up is Surveyor, an email/SMS marketing web application which I use myself (not public yet). I'm going to use it first for clients who ask me to create their website.

But the big one is a document management system that is totally different from the existing ones. I already have interest from a company with roughly 100 users, and we are setting up a small demo there in April. (P.S. This will hit Hacker News in about 2-3 months.)

I sell Asterisk reporting software for Windows at samreports.com. It makes about $1000 a month in revenue. I also work as an iOS developer for the man. I have a free iOS app on the App Store (HRTecaj), soon to be commercial, when I add ATMs. I was an Asterisk integrator, learned a lot about the system, and made software to present call reports in a customisable and pleasant way. SAMReports has been selling, consistently, for 4 years. I made a few updates, but now I'm working on a major update.

I've worked full-time as a consultant since 2007 and make a little over low six figures after taxes and paying contractors. We (http://www.goodproduce.net) do a lot of basic services like content development, web design (mainly WP), social media management, hosting, deck creation, and general "digital" consulting for high-net-worth individuals (primarily athletes and their brand partners).

In the early days, I worked to stay visible through conducting interviews for my company blog - that got us on the map in the sports community. It also helps that we never say no to a request...ever.

I've been running http://www.vladstudio.com (where I publish my wallpapers and other stuff) for several years, and for quite some time, it was my primary source of income. Unusual, because my premium accounts are not really a "product", but just a way to "like" or "donate".

I run http://flevy.com. It's a marketplace for premium business documents (e.g. business frameworks/methodologies, financial models, presentation templates, etc.). I do contract work out via odesk/elance from time to time.

Hey. The good thing about a one-man operation is that you don't have any overheads. I started my startup with 4 people, but later learned that one was enough to start off with, and that I should scale according to the profits I'm making instead of putting my own money into it (which I didn't have much of anyway).

My startups (http://opensource.com.pk and http://sells.pk/) are web agencies specializing in different areas. The former provides managed freelance outsourcing for larger projects, and the latter specializes in e-commerce for small/medium businesses.

The first few clients help you pay the bills and buy bread, but if you keep at it and stay persistent, after a year you will have more clients than you can handle; that is the time to get employees. I am almost reaching that point, and that is what excites me these days.

I run a web design and development studio that is just myself, although I established it as an LLC, and have been successful with it. I focus on WordPress solutions and started it by just diving in head-first and work pretty hard at it. I have a marketing background which helped me get it off the ground quickly, and am good at managing time, which has helped. I totally love it.

I spent a number of years wasting time on IRC looking at and chatting to people about technical issues, plus a bit of humorous banter. These days all of my income comes from activities where IRC is the main means of communication.

This seems quite nice. Two things I'd love to see, which would make it even more handy:

- I'd love to see each license that has a standard SPDX identifier (https://spdx.org/licenses/) include that identifier as metadata in its tldrlegal record. (And ideally tldrlegal.com should have a standard URL to reach the one-and-only license corresponding to a given SPDX identifier.)

- I'd love to see standardized tags for OSI-compliant Open Source licenses, GNU-approved Free Software licenses, GPLv2-compatible licenses, and GPLv3-compatible licenses. (For the latter two, it'd be interesting to have a generalized tagging mechanism for saying "compatible with (other license record)", but in practice those are the two cases that matter most, and too much generality might not be a good thing.)

Also, this site seems to be doing something with icon fonts that doesn't work: I see missing-character glyphs for characters F098 and F099 where Facebook and Twitter icons should appear, and an "fl" ligature where a search button should appear. (Firefox 27 on Linux, in case that matters.)

I also don't see the placeholders on the username, email, and password fields in the form that pops up when clicking the "Sign Up" link; the placeholders in the form on the front page do appear. (Consider using real labels rather than placeholders for forms like that, to make them more accessible.)

Hey guys, tldrlegal creator here. Just a quick note -- I am really glad to see discussions about license interpretations here, this is exactly what I hoped for when I first started the project. As a reminder on tldrlegal you can give feedback using the black tab on the right and also suggest a change to any license page by editing it when you're signed in! My biggest goal is to make sure content on tldr is of the highest quality and I'm really grateful that you guys are taking the time to critique and raise questions about the ways things have been summarized. Many of the summaries are outdated, and the best way for you guys to get your thoughts integrated is to use the editing features on the site! Tldr - you can edit license content on the site!

I hate to be a party pooper, but I think this site is completely naive from a legal perspective, and actually harmful. There are many ambiguities not covered by simple bullet point descriptions. Ask yourself, why are licenses long in the first place?

I defer to the IT / Digital Legal Companion, by Gene K. Landy, an actual lawyer:

"The Myth of the Two-Page Contract

... Many times a client has brought us a complex multiyear distribution deal and wanted a 'simple two-page agreement.' Sometimes, they even ask for a one-page deal! The ostensible goal is to cut down negotiation time. But for most negotiated deals, it is a myth that adequate contracts can be short in this way. If the lawyer tries to comply (and we have tried), the result will almost always be to make the deal ambiguous....Managing complex deals full of contingencies with as imprecise a tool as the English language is tough enough; it is not prudent to try (or force your lawyer to try) to make agreements more risky than they have to be."

The sentiment could not be more clear: all that legalese is in the agreement for a reason. "Simplifying" licenses into bullet points does not adequately capture their content, and it is a huge UI fail to make people think that it does.

Let's start with the site's treatment of GPLv3. You get a few bullet point labels about e.g. source code disclosure plus a three-sentence summary. "GPL v3 tries to close some loopholes in GPL v2." Oh, really, what are those loopholes? No discussion of code signing, patent rights, affero vs. non-affero, among many other issues.

Even the bullet points themselves do not adequately capture nuance. That the GPLv3 allows commercial use is technically true, hence the label "commercial use". However, what isn't mentioned is the fact that the GPLv3 separately prohibits practically every known business model for profiting from software.

These ambiguities are not minor issues; they are fundamental. Let's not wrongly give people the idea that complex licenses can be summed up with a few bullet points, any more than our technical work can be summed up with a few crappy analogies!

It starts out with: "The ISC license is not very well regarded because it has an and/or wording that makes it (debatably) legally vague." Um, really? There is only one person who thinks that: rms. The ISC license isn't the most popular permissive license, but it is reasonably common, being used by several large organizations (like, well, the ISC), and of course by a number of people for personal projects.

So aside from it not being copyleft, why _does_ rms (and thus the FSF licenses page) dislike the ISC license? Well, the ISC and Pine licenses both contain the following wording:

Permission to use, copy, modify, and distribute this software is hereby granted

University of Washington apparently tried to claim that this disallowed distribution of modified versions of Pine. Of course, that interpretation is completely ridiculous, as evidenced by 1) the reaction on debian-legal https://lists.debian.org/debian-legal/2002/11/msg00138.html and 2) the large number of people who use the license to this day.

But because _one_ copyright holder interpreted those words in a way that nobody else does, the FSF has claimed ever since that any license using such wording is dangerous (not nonfree, just dangerous). Which is silly. Anyone can misinterpret a license; I have seen several people release code under the GPL and then try to claim that it prevents commercial redistribution. Would the FSF list that as a reason not to use the GPL? Of course not.

Anyway, the thing that bothered me the most about the TldrLegal entry was the statement that it's "not very well regarded." More accurate would be to say "not well regarded _by the FSF_," but even that wouldn't paint the whole picture, because it doesn't account for organizations like OpenBSD or ISC who actively prefer it.

By the way, here's some interesting reading from Paul Vixie about the history of the ISC license: https://groups.google.com/forum/#!msg/comp.protocols.dns.bin... In response to the noise made by the FSF, the ISC changed the "and" to "and/or", though of course that didn't change the FSF's mind at all. OpenBSD still uses the "and" wording.

At the same time, I wonder how clear the descriptions are, particularly of the permissive licenses (this is a critique, not a teardown). I looked at the FreeBSD and MIT licenses because there is a subtle difference between them that can cause some confusion: the MIT license explicitly allows sublicensing while the BSD licenses do not.

The descriptions were good. The MIT description was exactly the way I read the license text. The 2-clause BSD/FreeBSD license said "you can do almost anything", which immediately raises the question "what can't you do?" My reading of the license is that the answer is "sublicense" (i.e. you can include the work in a product under a different license, but you cannot change the license on the BSD code in the process of transmitting it; this limitation does not exist in the MIT license, where you can not only change the license but likely assert status as licensor when you do so). IANAL, but this is the sense I get from Larry Rosen's book on the subject.

Again, I don't know that licenses can be perfectly explained in plain English, so this is a fairly minor concern if the site exists primarily as a conversation starter rather than something that people want to use as some sort of authoritative reference.

Now, I just glanced at the thing, and the very first thing that annoys me is this: Why would you require an account for submissions? Or rather, why would you need to have any kind of user accounts mechanism at this site?

The second problem is the combination of "TLDR" and "legal". One should at least bother to skim through the license of a program they'll use. BSD/MIT are about 20 lines, and GPLv3 is about 700 lines.

I met a lawyer at LinuxCon Europe last year. One interesting thing she said was that of all the other professions, software developers were the closest to actually understanding legalese. That's because devs understand if-then-else, switch-case, variables, etc. That's exactly what the legal language style is.

I wish the Apache License was more popular. It's in the same spirit as the MIT License, but it provides more protections regarding patents, for example. This prevents the "bait and switch" method of licensing a work under a permissive license but restricting it under patent law.

1. The social links. Not only are they not necessary, but there appears to be a bug with the Facebook widget (Chrome 35, OSX) that causes it to be about 1000px tall and invisibly cover the entire "Newest" column, making all of them unclickable.

2. The expansion of the 'rules' sections on hover. Not only is that information not really useful in that format (tiny text, difficult to read, too terse to be useful), the expansion makes it difficult to click on what you want. It may be best to either omit them or leave them expanded by default.

> Describes the warranty and if the software/license owner can be charged for damages.

which reads awkwardly.

It's like someone was describing the category "Hold Liable" instead of what the BSD2 license says about liability. Most of the descriptions have this problem. Something like "The software/license owner can't be charged for damages or shortcomings." might flow better, though that implies having descriptions for both the positive and negative case.

edit Oh, normally the descriptions are hidden (with scripts enabled). So it's as if I asked "what is it?" by clicking and the description is the answer. That makes more sense.

As a software developer, I want people to use my stuff. Once over the hurdle of writing something that people might find useful, I have to choose a license. When it's a library, I usually choose something without many restrictions, like MIT or Apache v2, or else developers will just go look elsewhere.

However there is a project which is a full blown application, and I went GPLv3, because I want anybody to be able to use it, or fork it. However it would bother me if someone was using this project to earn money, as I personally forfeited deriving any income from this project (not even donations). Apparently GPLv3 allows commercial use (I didn't realize that when I picked the license).

Software licences for dummies - hell, I'm mostly clueless about this so I had a good look.

One point I will make is that as soon as you read the relevant material on this site, you have to go into a dark corner and read the small print.

I've noticed the odd comment or two on this page mentioning lawyers. A bit expensive, but they're the most qualified people to talk to, bar (sorry, bad pun) judges.

At college, I used to live with someone training to be a barrister. Whereas the average thickness of my maths and science texts is, say 500 pages, I saw one book on law hitting around 2,500 pages. It ranks alongside the London Knowledge for verbosity.

Looks promising. Not sure what I think of the concept of a 'Manager' (user who was the original submitter) being the only one to approve changes, seems a bit odd considering the rest of the community aspect.

A similar site, tosdr.org, was launched a couple years back to help filter through website TOS. It lacks a community editable system, and even with various publicity and funding still has a limited set of sites in their index.

Hopefully this new site can gain more traction and continue to develop.

I saw this linked somewhere else months ago and keep forgetting about it. Extremely useful if you really don't want to read through a large license file to see if you can use a library in a project or not.

What is the best license where you maintain the brand name of the open source software, and allow end users to modify it according to their needs, but not release it under a different brand name or use it to run commercial services off the software (that would require a commercial license)?

The game I'm currently working on is built very heavily around Lua. So for the save system, we simply fill a large Lua table and then write that to disk as Lua code. The 'save' file then simply becomes a Lua file that can be read directly back into Lua.

This is absolutely amazing for debugging purposes. Also, you never have to worry about corrupt save files or anything of its ilk. Development is easier, diagnosing problems is easier, and using a programmatic data structure on the backend means that you can pretty much keep things clean and forward compatible with ease.

(Oh, also being able to debug by altering the save file in any way you want is a godsend).
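The same trick can be sketched in Python standing in for Lua (a hypothetical illustration, not the commenter's actual code): the save file is just a literal expression in the host language, written out with `repr` and read back with a literal parser.

```python
import ast

def save_game(state, path):
    # Serialize the state as a literal expression in the host
    # language: the save file is human-readable, diffable, and can
    # be edited by hand for debugging, like the Lua-table saves
    # described above.
    with open(path, "w") as f:
        f.write(repr(state))

def load_game(path):
    # ast.literal_eval accepts literals only, so unlike executing
    # a raw Lua save file, a tampered save cannot run code.
    with open(path) as f:
        return ast.literal_eval(f.read())

state = {"level": 3, "hp": 42, "inventory": ["sword", "potion"]}
save_game(state, "save.dat")
assert load_game("save.dat") == state
```

One design note: raw Lua gets this property for free because `load`-ing the save file executes it, whereas Python needs the restricted `ast.literal_eval` to keep the "save file is code" convenience without the arbitrary-execution risk.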

What's with all the XML hate? Of course, doing everything in XML is a stupid idea (e.g. XSLT and Ant), and thank heavens that hype is over.

But if I want something that is able to express data structures customized by myself, usually with hierarchical data that can be verified for validity and syntax (XML Schemas or old-school DTD), what other options are there?

Doing hierarchical data in SQL is a bitch, and if you want to transfer it, well, good luck with an SQL dump. JSON and other lightweight markup languages fail the verification requirement.

Back in the bad old DOS days, instead of creating a file format for saving/loading the configuration of the text editor, I simply wrote out the image in memory of the executable to the executable file. (The configuration was written to static global variables.)

Running the new executable then loaded the new configuration. This worked like a champ, up until the Age of Antivirus Software, which always had much grief over writing to executable files.

I don't quite get Linus' problem with XML for document markup (for anything else - config files, build scripts - sure, XML is horrible). Does anyone know any more details about what his specific gripe is? For me, asciidoc (which looks very similar, conceptually, to markdown) suffers from one huge problem: it's incomplete. Substituting symbols for words results in a more limited vocabulary, if that vocabulary is to remain at all memorable.

Sure, XML can be nasty, but that's very much a function of the care taken to a) format the file sensibly and b) use appropriate structure (i.e. be as specific as necessary, and no more).

What I like is the "I don't start prototyping till I have a good mental picture".

I am currently stuck on a project I want to start because I cannot get it to fit right in my (future) head. And I am glad I am not an idiot for not being able to knock out my next great project in between lattes.

(OK, in direct comparison terms I am an idiot, but at least it's not compounded.)

> "I actually want to have a good mental picture of what I'm doing before I start prototyping. And while I had a high-level notion of what I wanted, I didn't have enough of a idea of the details to really start coding."

This I like. The race away from the waterfall straw man has also stripped us of the advantages of BDUF.

While rigid phase-driven project management helps nobody, I think there's still room for speccing as much as we can upfront within iterative processes.

Or you could run to the IDE and start ramming design pattern boilerplate down its throat the second you're out of the first meeting ;)

>>So I've been thinking about this for basically months, but the way I work, I actually want to have a good mental picture of what I'm doing before I start prototyping. And while I had a high-level notion of what I wanted, I didn't have enough of a idea of the details to really start coding.

This might be a tangential discussion. Earlier, I used to have a similar approach: can't code until I have the complete picture. But it's tough to do in the commercial world, where you have deliverables. So nowadays I start with what I know and scramble my way along until I get a better picture. There are times when that approach works, but there have been days where I was like, "wish I had spent some more time thinking about this".

Worked on a project a few years ago where we needed distributed sync capability. Using git (or bazaar or mercurial) was one of the options - store everything in it versus a database. Interesting to see the same thought "coming back".

What is it with HN commenters and their demented ability to send topics completely off track? I would have thought someone might have examined the code, or what Linus is trying to implement, and commented on it.

But here we have threads about Lua, why people hate XML and love JSON, and all kinds of irrelevant issues which have been well hashed elsewhere ad nauseam. Why not restrict ourselves to an analysis of whatever it is Linus is developing?

Secondary math education, for me in the UK, didn't deal with anything outside of elementary algebra, Euclidean geometry, some statistics, and relatively simple calculus. Nobody talked to us about imaginary or complex numbers, Bayes' theorem, decision theory, or non-trivial mechanics problems until I was in college (age 16+). Nobody mentioned matrices, broader number theory or discrete transforms until I was in university. I studied EE, not compsci. Things like algorithmic complexity I had to learn for myself and from Knuth. I'm trying to grok group theory right now to help with my understanding of crypto. Before this, it was never mentioned throughout my education, so I don't know what courses you would have had to take to learn that. The fact that I didn't even know group theory was important to crypto until after I had made the choice strikes me as a bad sign.

The common theme at every level is learning cherry-picked skills, before you're even told what the branches of mathematics even are. Everything seems disjointed because you're not taught to look past the trees for the forest. Most people, in fact, even technical folk, go through their entire lives without knowing the forest even exists. Any idiot can point to a random part of their anatomy and posit that there's a field of study dedicated to it. The same goes for mechanics or computer science. You just can't do that with mathematics as a student.

I loathe academic papers. Often I find I spend days or weeks deciphering the mathematics in compsci papers only to find the underlying concept is intuitive and plain, but you're forced to learn it bottom-up, reconstructing the author's original genius from the cryptic scrawlings they left in their paper... and you realise a couple of block diagrams and a few short paragraphs could have made the process a lot less frustrating.

So many ideas seem closed to mortals because of the nature of mathematics.

I currently teach math to at-risk students. I don't read all of these submissions about math education, but I skim the comments on most of them. The comments people make change the way I teach math.

I have always done a decent job of teaching math. I focus on helping students understand concepts, even when they are focusing on mechanics. I use words like "shortcut" and "more efficient method" rather than "trick" when showing students more efficient ways to solve problems. I have students do problems and projects that relate to their post-high-school goals.

But with the routines of school life, I get away from the fun of math from time to time. The comments on these submissions often remind me to go in and just tell stories about math:

- "Hey everyone, did you know that some infinities are bigger than other infinities?"

- "Hey everyone, do you have any idea how your passwords are actually stored on facebook/ twitter/ etc.?"

- "Have any of you heard the story about the elementary teacher who got mad at their class, and told everyone to add up all the numbers from 1 to 100? One kid did it in less than a minute, do you want to see how he did it?"

Thanks everyone, for sharing your perspective on your own math education, and about how you use math in your professional lives as well. Your stories help.

I've felt this is the case for a long time. A lot of people have a smooth experience in math for years until they hit their first serious discontinuity. That could happen anywhere: times tables, fraction arithmetic, two-step equations, geometric proofs, radicals, limits, or maybe even college math. The reaction is nearly universal though. The person thinks, "holy crap, I guess I'm actually not good at math", anxiety strikes, and they freeze up.

Some people eventually find their way around this first road block, and future discontinuities in understanding become less stressful, eventually understood to be a completely normal part of the process.

But the usual experience is that a person's math confidence is blown and as the math truck barrels on ahead, they never catch up. They understandably accept the identity of not being "good at math".

What's missing in math pedagogy at most schools is a systematic way to deal with the discontinuities when they strike, especially that first time. We can prepare students to deal with that panic. The tough part is that the math teacher probably has 90 students on roster, but the discontinuity could hit pretty much any given lesson, for some given student.

I know so many people who have come back to intermediate math later in life and breezed through it, armed with intellectual confidence gained from other fields. They look back and wonder how they came to be so intimidated by math in their younger days. We've got to give younger people the tools and knowledge for overcoming this intimidation at a younger age. We've got to kill "I'm just not good at math".

The entire post was enjoyable but I found the last paragraph to have the most actionable advice:

"What's much more useful is recording what the deep insights are, and storing them for recollection later. Because every important mathematical idea has a deep insight, and these insights are your best friends. They're your mathematical nose, and they'll help guide you through the mansion."

I really enjoyed this because it captures so much of the frustration that I felt early in my programming career, especially in college when I had classmates several years my junior who were (as far as I could tell) mathematics and programming wunderkinds. I also think that this is the sort of rhetoric that should be used when beginning to teach children basic mathematics, and more advanced concepts as well, because I still recall many of my classmates in elementary and even high school who simply felt like failures, or that they weren't smart enough to understand things, because they didn't "get" it the first, or fourth, or fiftieth time.

This misses the dangerous part, which is mathematicians in groups can confuse each other into accepting ideas which are basically nonsensical, especially if the counter argument relies on some obvious but intuitive observation of reality but cannot be easily formalised within their chosen framework of the moment.

As a consequence of this it wouldn't surprise me if the overwhelming majority of maths was actually incoherent nonsense and that the people that understood this thought they were just very confused due to being shouted down all the time, when the really confused people are the ones oblivious to their own situation.

With both computer science and maths you are chronically confused. The difference is that with computer science it doesn't matter so much if you don't understand something: if you can get it to work, you know you are on the right track. Maths is much more cumulative; each proof builds on a previous one. So if you fail to understand one step, you are screwed from that point on.

After the first year I realised I didn't actually enjoy being permanently confused and so I ditched the maths to focus on computers. I do regret this. It didn't take long at all before I forgot all that knowledge I had spent years sweating over.

Fair enough. I tried in my youth to solve every problem I came across. There were many I couldn't solve. It took a while before I developed the wisdom and discipline not to solve every problem no matter how long it took. By a while I mean decades. I sacrificed the possibility of family life, have stopped talking to my uncomprehending stepfather, and have kept my social interactions to an absolute minimum to pursue my consuming interest. (I mention this as a point of pride.) I find myself continually astonished by the ingenuity of solutions I probably could never have imagined after years of work. Perhaps, after a lifetime of effort that must be continually maintained, I have attained the level an entering freshman at Harvard. At this stage, I may be reduced at best to connoisseurship of some aspects of mathematics.

Now for some reflections on attitudes. Mathematicians sometimes act as if they believe that expertise in mathematics transfers to expertise in mathematics education. Suppose you are a sensitive student, lacking in confidence. You open Korner's beautiful book on Fourier Analysis, and the first thing you are greeted with is "This book is meant neither as a drill book for the successful student nor as a lifebelt for the unsuccessful student." Korner does not mention other references suitable for the successful and the unsuccessful student. You take this comment to mean that Korner would let the unsuccessful student drown. There is no implication, but this is the psychological import, the implicature. Why mention the unsuccessful student at all? Why not say who the book is for, without planting this gratuitous image in the reader's mind? It would take some time to return to this book, to get past the wonder at a mind capable of such an incidental, dismissive, off-handed acknowledgement of "the unsuccessful student."

You could say this is "overthinking." Such remarks, microaggressions as they are termed today, "perpetrated against those due to gender, sexual orientation, and ability status", are sometimes revealed in the asides of mathematical authors [1].

And now if only mathematics educators would evaluate their students on the state of their confusion!

I wish this post had been around when I finished my undergraduate degree in Mathematics. I would have taken my adviser's advice to go to grad school. At the time, I remember telling him that I felt like I barely made it through the program. Apparently I wasn't alone. Amazing the difference 25 years and the internet make.

"If you're going to get anywhere in learning mathematics, you need to learn to be comfortable not understanding something."

That's true of everything. It's fear and anxiety that prevent a lot of people from learning and trying new things. I keep trying to tell students or family members, when they are learning to do stuff on the computer: just right-click everything, just google anything you can think of, don't worry about it being perfect, don't worry about breaking anything. You have to hold back from showing them the "answers" or else they become dependent.

I completely agree about the power of math, and why programmers should learn it. There are two problems with math:

(1) Math is IMHO the worst taught of all academic subjects.

It's taught as if it were not a language. Math profs and books on mathematics never explain what the symbols mean. They just throw symbols at you and then do tricks with them and expect you to figure out that this symbol means "derivative" in this context. I have literally seen math texts that never explain the language itself, introducing reams of new math with no definitions for mathematical notation used.

I've looked for a good "dictionary of math" -- a book that explains every mathematical notation in existence and what it means conceptually -- and have never found such a thing. It's like some medieval guild craft that is passed down only by direct lineage among mathematicians.

Concepts are often never explained either. I remember struggling in calculus. The professor showed us how to do a derivative, so I mechanically followed but had no idea why I was doing what I was doing. I called up my father and he said one single sentence to me: "A derivative is a rate of change."

A derivative is a rate of change.

I completed his thought: so an integral is its inverse. Bingo. From then on I understood calculus. The professor never explained this, and the textbook did in such an unclear and oblique way that the concept was never adequately communicated. It's one g'damn sentence! The whole of calculus! Just f'ing say it! "A derivative is a rate of change!"
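That one sentence is also easy to verify numerically. A quick sketch (my own illustration, not from any textbook or professor mentioned here): the derivative of f(x) = x² as a rate of change, and an integral summing those rates back up to recover the original change.

```python
def derivative(f, x, h=1e-6):
    # Rate of change: slope of a tiny secant line through f near x.
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=100_000):
    # Accumulate rate-of-change back into total change (midpoint Riemann sum).
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x * x
print(derivative(f, 3.0))                            # ~6.0, i.e. 2x at x = 3
print(integral(lambda x: derivative(f, x), 0, 3))    # ~9.0, i.e. f(3) - f(0)
```

The two operations undoing each other is the fundamental theorem of calculus in one screenful.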

(2) The notation is horrible.

If math were a programming language it would be C++, maybe even Perl. There are many symbols to do the same thing. Every sub-discipline or application-area of mathematics seems to have its own quirky style of notation and sometimes these styles even conflict with each other.

Yet baroque languages like C++ and Perl at least document their syntax. If you read an intro to C++ book it begins its chapter on templates by explaining both what templates are for and the fact that type<int> means "type is templated on int."

Math doesn't do this. It doesn't explain its syntax. See point #1 above.

That's what I tell people around me. Studying math is hard because it makes you feel stupid. You always feel lost, you always feel like you missed so many things when you're starting to learn something new, you always feel like your questions are stupid (until you realize the rest of the class is just as lost as well).

Especially with talented professors (at Lyon 1 in France, the professors are not really good educators, but they are geniuses); they make you feel bad for not understanding things that seem so simple to them.

I think this is a good read, although I don't agree with all of it - I'm of the mind that there is immense value in being able to figure out difficult proofs. The process develops your logical ability.

This is true with many, many things. Very often it is the connections between ideas that yields the deep understanding, not the ideas themselves. Focusing too intensely on a single idea or subject results in not making connections and, consequently, not really understanding.

It's strange to hear mathematics described more as a search for art and structure than computation. Unfortunately most of my math education was on the computational/applied side. I'm only getting into number theory and the more esoteric math later in life, for fun. As a parent, I think we can't let the school system destroy our kids' love of math through too much rote learning. We have to make it fun for them. (Same with music, btw.)

Reminds me of this great quotation, which Oksendal places before the preface to his stochastic differential equations book:

"We have not succeeded in answering all our problems. The answers we have found only serve to raise a whole set of new questions. In some ways we feel we are as confused as ever, but we believe we are confused on a higher level and about more important things." (Posted outside the mathematics reading room, Tromsø University)

Reading good foundational text books carefully is darned good advice. But for solving every exercise before moving on, no, that's not a good idea. Instead, be willing to be happy solving some 90-99% of the exercises. For the rest, guess, with some evidence, that they are incorrectly stated, out of place, just too darned hard, or some such. If you insist on solving 100%, then get on the Internet and look for solutions.

Next, if you read some foundational text books, then in each subject also read several competing text books: perhaps focus mostly on just one, but also look at least a little at the others for views from 'a different angle' that can be a big help. Why? Because likely no text book is perfect and, instead, in some places is awkward, unclear, misleading, clumsy, etc. So, views from a 'different angle' can make it much easier to learn both better and faster.

His description of doing applications by just getting what you really need and forgetting the rest can be done but is not so good. Instead, having a good foundation helps a lot. And, commonly for an application in an important field, there really is some good material in that field that you should understand along with the application. Else you risk doing the application significantly less well than you could have.

His description from Wiles is more or less okay for doing some research but, really, not for learning. And for research, more of a 'strategic' overview, i.e., the 'lay of the land', would be good: for publishing not just one okay, likely isolated, paper but a series of better papers that yield a nice 'contribution'.

You know, there's a time and a place for quiet reflection. If the author needed time to reflect, he should go for a walk alone, not go to a party.

I'm a shy introvert, but I see this fellow's problem a mile away: he needs to get better at saying no to people. Sure it's important to be present for social functions, and there's an art to "making an appearance" that is just part of playing the game. But if you're consciously aware that you should be somewhere else... GO.

A room full of tech geeks will get this. Actually, they most likely won't notice that you're leaving. That particular neurosis is rooted in ego: part of you wants someone to notice that you're missing, so that you can be not just doing something important but be a hero about it.

Deep down, we're all frequently irrational in similar ways.

"I don't want to be alone, but I want to be left alone." - Stephen Fry

I love the writing, I love the anecdote, and I love the self-awareness it shows.

Almost just as much, I love the comments here - they're just so incorrigibly HN. Not everything is a problem that should be taken literally, dissected, and solved. The story is not the author asking for help with enjoying parties, relating to others, or troubleshooting bugs. It's a beautiful example of an internal monologue that shows not only how people approach social situations differently, but to what extent they think differently. I'm not sure the author needs any of the pseudo-analysis being offered to him (however well-meaning it is) - the writing suggests a lot more awareness of his perspective and that of the people around him than most of the comments rushing in with the most literal interpretation.

Either way, great piece of writing that seems to hold a mirror up to the reader more than anything - it clearly strikes a chord but each reader appears to be taking away something different.

My personal theory: For something really hard, you have to put in your time and think about the issue. It has to consume you. You can't hold decent conversations. Eventually that janitor in the back of your head wanders up to your mental whiteboard, looks at the problem, says "Harumph!" and scribbles down an answer, and you wake up in the middle of the night with the answer so obvious and a shriek on your lips.

But there aren't any shortcuts, and the janitor is not at your beck and call.

So, this guy went to a social event that he didn't want to go to because his mind was on work things and had been for a long, long time.

[edit] I'm taking the stance that there was a good reason for his going. Otherwise the question is just "why did he go?"[/edit]

Why wasn't he in the moment at the party? I suppose there are several reasons. I'm going to assume he was having a great time, someone that he was talking to left, and then he wandered back into thought rather than going off to talk to someone else. That's fine, whatever. So, now I wonder where I'm supposed to go from here with this piece. The thing is, this can't be about getting flustered at people for interrupting your thought. It doesn't look like it. Therefore, I'm going to assume it's something like I'm doing right now: stream of consciousness. If that's the case, then, neat! I've been there! Very cool. Sorry it got awkward for you there. The other guy is a CEO; he understands being in thought all the time. "Just one sec, I gotta write this down" and then scribbling a bunch of notes wouldn't be too offensive to a man in charge of a whole company. He's done it plenty of times, and those ideas come at any random moment. I wouldn't be offended by a brief scribble before some proper salutations. After all, that CEO has now been given your undivided attention after about 5 seconds of scribbling (presuming you can write something short down that can be used to jog your memory). People like undivided attention. Makes them feel important, be they your boss, co-worker, friend, spouse or child.

Now, what is this story being used for? "I am not an introvert. I am just busy." No, you're not busy. Or, at least, your busy-ness shouldn't be with work things right now. You're at a party and should be in party mode with your friends. It's kinda like a father going home and saying he's going to spend time with his children, only to completely space out when he's playing catch. His mind should have been on his children. Your time is with your friends there. Not giving them your attention is rude to them. "I am not socially awkward / going through the motions; I had a sudden thought I needed to write down real quick" just wouldn't be as catchy of a title.

There are those little notebooks that fit in front pockets that people buy and carry around. Maybe this is what those are for; or, as someone else in this thread pointed out, that's what the 'notes' app on your phone is for. I'd honestly not considered that as a reason, or if I had, I just re-realized that's what they can be for. Anyway,

We can take this to some other situations where it wouldn't be acceptable to be sucked into this train of thought: a meeting about a different feature at the job you're working at. They want you present on their tasks, too.

Live in the moment, be that completely absorbed in your current work task, or hanging out with your friends, laughing about stupid things, or hearing a friend talk about his story.

I would like to see this story rewritten from the point of view of a super hacker who likes to visit bug lists, track down the programmers assigned where they work, and psychologically hack those programmers into finding the solution. He protects his identity wearing only a hoodie.

> Are they actually talking to me? Unbelievable. I can't talk, Dan; can't you see? I'm hanging to this idea by a thread as it is

My problem with this story is that it makes it sounds like it's Dan's fault for walking up to the author at a party/social event. We've all been there -- an all-consuming problem or an unexpected moment of clarity. Or maybe you realize something that puts you into a sour mood. We can't expect those around us to be mind readers.

Nope, sorry, you're wrong. You are the exact definition of an introvert. You're in a social environment, surrounded by other people, and instead of interacting with others, you're lost in thought about something else entirely.

Just wikipedia it: Introversion is "the state of or tendency toward being wholly or predominantly concerned with and interested in one's own mental life". That exactly describes the behaviour you're relating. Now, being an introvert isn't a problem, but that title tells me that you're not so much concerned about introversion being a problem as you are about grandstanding and making your "busyness" mark you as important somehow.

I think the takeaway is that sometimes you can't see the forest for the trees, so look at the sunset.

When I'm working on something and I just can't figure it out, I take my dog for a walk and hold some ridiculous one-way conversation with him. Yeah, I know I must look like a maniac talking to an eight-pound pomeranian, but the point is to put your mind completely somewhere else. It's amazing how the solution just appears when your mind isn't engulfed in the problem.

What doesn't seem to be coming through here is the power of the subconscious to pattern match and work on problems independently of our main heads. The way this guy figured out the solution to his problem was by observing another person's behaviour in an elevator, which matched a pattern in his mind.

Stimulus is often a good thing. I've actually found watching a lot of unrelated everyday interactions helps with designing systems.

My mother still recounts that at primary school (5-11) all I did was day-dream all day and they could never get me to do any work. That's not strictly true of course - I was reading "top class" books when still in infants, was never challenged by the maths we did, and taught the teachers about electric circuits (perhaps they were just humouring me).

One of my favourite things to do is simply sit and stare out the window, or sit on the stairs but I'm always thinking about something. Always inventing something in my mind or doing some gedanken or other.

I wish this had been recognised as indicative of internal complex state rather than laziness and vacuity and then I might have been encouraged towards developing those thoughts properly.

I totally get this. As one who works in a coworking space I have to fend off people all day. I love to talk, but not when I'm trying to write code. Some people don't realize that headphones are the universal sign of "I'm busy".

Replace 'CEO' by 'my wife' and you have my life. It's difficult to explain apparently, but me sitting quietly in my underwear in the office chair staring at the wall on a Sunday morning doesn't mean I'm not busy. Those pancakes can wait.

Fantastic read. I'm not a fan of the labels introvert and extrovert as they apply to people. However, I think as they apply to behaviors they are useful, and I found the fact that you didn't advocate for yourself to get out of there (possibly even at the expense of being rude) to be a bit introverted. Same with the idea that you usually "blow it" during conversation. (Alternatively, perhaps I am asocial.)

This is why I used to carry around a pocket notebook with me at all times (nowadays I use the Notes functionality on my smartphone). Managed to figure out the solution to a problem I've been struggling with? Got a sudden flash of creative inspiration I don't want to forget? Need to remember to do some chore later on this evening? Pull out the notebook and pen, quickly write it down as best I can, then put the notebook back in my pocket and return my focus to what's going on around me at the time. That way I can put that idea out of my mind, returning to it when the situation is right.

I'm in a situation now where I might end up taking my first office job in more than a decade.

I'm actually kind of worried that the impact of having other people around will significantly limit my productivity. The few times I have gone to a remote office to do some work, I really suffered with the open plan situation.

I have to make a conscious effort not to be gruff and terse with my SO when he breaks my concentration, and I outright love him. I have to remind myself that other people aren't able to perceive what is going on in the virtual world.

Maybe it will work out fine, but in all honesty having to commute and be somewhere every day at a certain(ish) time is probably what's going to kill the experiment for me, not the other people.

I never understood why being an introvert is looked at as a negative, like the last thing you want to be is an introvert.

If this piece is this guy's inner monologue, he's an introvert. If he's incessantly focused on _things_, and people seem to be a distraction from that thing, that's a pretty big indicator of an introverted personality.

Extroverts can't _help_ but think about connecting with people. They thrive on it. They're people people.

I know a developer who once walked over to a business analyst to ask about an interesting ID in our database. She chatted with him for a bit, drew her question, then they bantered some.

At the end of it, as they conversed, he recalled the interesting ID that she had originally asked about and told her; then, as she passed on to her next endeavor, her closing comment was, "Now let me try to remember that number all the way back to my desk."

Now granted, this is just an ID, not a bug, an "idea", or something embedded in some complicated (dis)array of logic, but she engaged the person whom she asked, and through human interaction and grace, the two pulled out an answer together. Then she took it in stride that remembering it was up to her.

I remember when I used to be more like her and less like this antihuman, perpetually brooding, code-distressed, oh-can-you-leave-me-to-my-precious-mind sob-story archetype that you rage-hackers (again perpetually) perpetuate.

Why do we romanticize this? The glorification of obnoxiousness is becoming obnoxious. Your mind is your garden, and no one owes you peace of mind. And if people wish to browse your garden, you should be absolutely fucking thrilled.

Why are you propping up and romanticizing this "do not enter" sign at the entrance of your gardens?

You know, there's a Java dev here that oftentimes will start off an interruption with, "How can I provide you with outstanding customer service today?"

How about this? Forget your "engineer" metaphor. Forget your "prodigious self-torment". If you want to fold to the Machines, that's YOUR M.O. Stop pissing about it. Stop whining. Our job has one distinct role, and that's to protect EACH OTHER from this massively complicated world of machines. Do your goddamn fellow human a favor, and pay more attention to HUMANS than you do machines.

Maybe your life will be filled with more spontaneity, warmth, and gifts because all a fucking computer is going to give you is rules. One key subcomponent of our job as developers, programmers, etc. is Customer Service and that's because humans first.

~(introvert || extrovert). Or perhaps a better definition is that there is no common mutex for introverts and extroverts. Or we're all a little introvert and all a little extrovert.

Introverts are powered by self-reflection and alone time spent understanding the things they love. Extroverts are powered by spending time with other people and reflecting on what they love. Most of us are a linear combination of the two. I'm about a 0.6i + a 0.4e (factors subject to change; some amazing people's factors add up to 1.0; warranty void where prohibited).

Introversion and extroversion aren't necessarily a dichotomy and aren't anything to apologize for. We partake of these modes of sociality as life permits. If you're binary on the scale, great - that helps other people understand you. If you're analog on the scale, great - you can help others understand where you currently are.

You seem too anxious. I don't know about you, but once an idea occurs to me about what might be causing a problem, there's zero chance of forgetting it. You should have just enjoyed the party. After all, the ROI on the relaxation had just gone up significantly.

Jot some notes on the back of a napkin and the issue becomes less of one. The real problem, as others have said, is agreeing to things that you would rather not go through with... Now, that I can sympathize with. Not least because I struggle with it too.

It's so refreshing to see a service that requires no signup. Normally a signup acts as a barrier to entry for me, especially when it requires a link to a social media identity. I'm going to have a play around now.

There's a lot of nice things about this UI, well done! The difference between "animation" and "design" modes isn't quite obvious at first glance. The switch doesn't tell you anything, so you have to experiment to find out what it means.

I'm happy to see that you're using Canvas for the published animation. It's the only way to go for artistic animation where content needs to update at every frame -- CSS just isn't enough for that.

Some three years ago, I made an HTML5 animation tool that outputs to Canvas + video: http://radiapp.com

Despite all the hours I put in, I never managed to figure out a way to turn it into a real product. I think you will fare better :)

The tool looks great and everything was really well done, but who is the customer for this?

It seemed like it was for non-designers but when I open the editor I get a blank canvas and I have absolutely no idea where to start to make a professional looking animation. Seems like I would still need to hire a visual designer/graphic artist who would most likely use the tools they are already familiar with.

I'd love to try this out, as I've been looking for something like this for a long time. But I can't use it because I can't sign up with a password longer than 16 characters. This is a rookie security mistake that I see over and over again, and it makes me worry. As a rule of thumb, you should NOT limit the length of a password. Salting will take care of the weak password problem, and a long password is, well, more secure.

Edit: use a secure hash on the password before storing into the database
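To make that edit concrete, here is a minimal sketch of salted password hashing using only Python's standard library (PBKDF2; the function names and iteration count are my own choices, not a claim about how this site should do it). Note that the stored digest is fixed-size no matter how long the password is, which is exactly why a 16-character cap buys the server nothing:

```python
import hashlib
import secrets

def hash_password(password, salt=None):
    # A fresh random salt per user defeats precomputed (rainbow) tables.
    if salt is None:
        salt = secrets.token_bytes(16)
    # Many PBKDF2 iterations make brute force expensive; the digest
    # is 32 bytes regardless of how long the password is.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking how many bytes matched.
    return secrets.compare_digest(candidate, stored)
```

bcrypt, scrypt, or Argon2 are stronger choices if a third-party dependency is acceptable.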

Nice, yeah let's make a Flash IDE in a browser, which will compile results into a real .swf, which will then play in an HTML5-based Flash player on any device. Maybe support of a few things like rtmfp protocol will be tricky, but who cares.

Really nice work. Flash-like canvas animation is due to gain popularity in the near future.

Your fill-bucket tool is a little freezy, as I'm sure you are aware. I made an HTML5 static image editor (http://yangcanvas.com/paint) and I found a good fill-bucket tool hard to implement too. Your tool does fill ranges perfectly, which is really nice. Good luck making that work more efficiently without sacrificing precision as I did.
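One common reason fill-bucket tools get "freezy" is a deeply recursive fill blowing up on large regions; an iterative, queue-based flood fill sidesteps that. A minimal sketch (my own illustration, nothing to do with either editor's actual implementation):

```python
from collections import deque

def flood_fill(grid, start, new_color):
    """Recolor the 4-connected region of `grid` containing `start`,
    using an explicit BFS queue so deep regions can't overflow the
    call stack the way a naive recursive fill can."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    old_color = grid[r0][c0]
    if old_color == new_color:
        return grid  # nothing to do; also avoids an infinite loop
    queue = deque([(r0, c0)])
    grid[r0][c0] = new_color
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old_color:
                grid[nr][nc] = new_color
                queue.append((nr, nc))
    return grid
```

Real paint tools add tolerance-based color matching and scanline batching on top, but the queue-based core is what keeps the UI from locking up.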

Just had a short look at the video right now, going to check more, but it looks good. Is there a way to embed the video in a blog post? I'd like to blog about it and embed the video. When I right-clicked on the video, it had an option to copy the video URL, which I did, but that results only in a link to the video, not the embed code.

This is very impressive. As someone who has no idea what I'm doing, I was able to make an animation of a circle moving and growing from left to right in 30 seconds without reading the manual or tutorials. This is how user interfaces should be built!

Great find for me! I am in design/dev for the learning and development industry, so naturally many teams have strong backgrounds in Flash, and lately Adobe Edge Animate and Tumult Hype (even Google has thrown something into the ring). Literally the only thing I did to determine whether I would investigate Animatron further was opening it up, dropping some audio into the project, and seeing whether the waveform shows up on the timeline layer. This is a MUST for WYSIWYG sync of animation to audio... it's what the Flash IDE had that Edge and Hype still do not! The next thing is determining the workflow for integrating with eLearning standards such as SCORM and Tin Can API / Experience API... naturally this is the type of product I would also pay a premium for to integrate further into my team's and clients' workflow for Instructional Design, etc.

Great - all the tools that come close to this have been flash and don't run easily on phones and tablets. I noticed it crashed mobile safari for me. Do you have plans to make this workable for non-desktop users?

I still use the tiny, lightning-fast Macromedia Flash 4 to make simple animations, when it's just simple animations, without all the scripting power, features, and Photoshop integration you get with Adobe's 2 GB on-disk elephant. I used to run this Flash 4 on a Pentium 133 MHz with only 16 MB RAM and it wasn't that slow. I was waiting, wondering why no one had brought something like that to HTML5 yet, and finally it is here! Thank you.

Oh man, I hate this as much as the next person, but I think the best thing we can say right now is that there is nowhere near enough (public) evidence that this is a believable allegation. What we have is an article from McClatchy whose title ends in a question mark (Probe: Did the CIA spy on the U.S. Senate?), which to me is the red flag of red flags that they have no level of certainty whatsoever. Then the article seems to draw dubious lines between this allegation and some questions in hearings. Other articles building on it imply additional tenuous connections between all this stuff and a letter Mark Udall wrote that may be referencing this vaguely, maybe.

It's a problem that all of this stuff has to remain vague. It gets in the way of our reaching conclusions. But assuming the lack of information is information in and of itself is problematic for me in this case. I think it's fair to wait and see what the justice department's investigation, if any, reveals. If there's no investigation, then we have to make do with the information we have.

The fact that the CIA has been shown to be doing all sorts of terrible stuff doesn't mean that our obligation to be skeptical about allegations in general needs to be suspended. To me, it's likely that this is true, but I won't tout it as fact until something clearer than the current foggy tangle of vague statements emerges.

As a side note, I think the greater question to arise from this is the fact that during a Congressional investigation, it was through agreement that the CIA wasn't supposed to be monitoring Congressional investigators. Why is that sort of thing not clearly ensconced in law?

The thing I would be wondering in a Congressperson's shoes: what other things have I been doing that the CIA has been spying on?

Chilling effects indeed.

And for those inclined to brush that away as implausible, it might be time for a refresher on J. Edgar Hoover and his secret files on political leaders. [1] 50 years later, our tech is a lot better, so it would be much easier to gain inordinate power through surveillance.

>Congressional aides involved in preparing the Senate Intelligence Committee's unreleased study of the CIA's secret interrogation and detention program walked out of the spy agency's fortress-like headquarters with classified documents that the CIA contended they weren't authorized to have, McClatchy has learned.

>After the CIA confronted the panel in January about the removal of the material last fall, panel staff concluded that the agency had monitored computers they'd been given to use in a high-security research room at the CIA campus in Langley, Va., a McClatchy investigation found.

>The documents removed from the agency included a draft of an internal CIA review that at least one lawmaker has publicly said showed that agency leaders misled the Intelligence Committee in disputing some of the committee report's findings, according to a knowledgeable person who requested anonymity because of the matter's extraordinary sensitivity.

>Some committee members regard the monitoring as a possible violation of the law and contend that their oversight powers give them the right to the documents that were removed. On the other hand, the CIA considers the removal a massive security breach because the agency doesn't believe that the committee had a right to those particular materials.

[...]

>While eating lunch during a visit to New Britain, Conn., with four New England governors, Obama was asked by a reporter if he had any reaction to the allegation that the CIA monitored Intelligence Committee computers.

>"I'm going to try to make sure I don't spill anything on my tie," he responded.

This just in: spy organization spies on important national political figures.

Did you guys never hear of intelligence agencies before Snowden leaked his docs? This is normal and expected. It's the reason intelligence agencies and spies exist. They're supposed to spy on the most important people in the world, and make sure that the important people don't plan anything the agency's employers may consider ... untoward.

Right, because the CIA has no electronic surveillance capabilities whatsoever. They can't cast a large dragnet over all of the country's communications like the NSA can, but they can sure as hell tap a few congressmen's phones and email.

How far does a government agency have to go in breaking the law before the military is deployed to put boots on the ground and reel the agency back under the rule of law?

I'd like to think that if agencies started hiring their own armies and created their own version of law enforcement zones from other countries, and started killing people who opposed them, that someone would actually do something to stop that... right?

Good stuff. Interestingly, the very first programming language offered in the introductory CS class at my college at the time (2000) was Scheme. The power of 'car' and 'cdr' still resonates in my head. At times the parentheses used to give me dyslexia, but those were good old days of doing stuff like:

Somewhere, deep in Lindley Hall at Indiana University, is an old professor exclaiming, "I told them people used it!"

I wonder if I'd have taken more to Scheme if I were learning it now. At the time, I was double majoring in CS and Telecom, but the world of open source hadn't been as friendly to Mac as it is now, and Macs were a prereq for TCom. Getting Scheme running on my old iBook was a pain in the ass, let alone the assignments (which still didn't match the untouchable stability of our automated grading system). I conceptually understood why I needed to learn it, and even grasped many of the concepts of what I was learning, but it wasn't the language for me.

Great work! :) Congratulations! For those who want to try developing in Scheme, I'm working on a project that could help you get started. Currently only Android is supported, but iOS will follow as soon as possible. http://schemespheres.org

It's truly inspiring to see a project like this completed. I've been wanting to combine mobile (specifically Android) and some sort of Lisp dialect for a while.

Am I interpreting correctly from some of the other components that doing the programming in a language-once-removed (ie Scheme instead of Obj-C) opens an easier path to compiling for both iOS and Android?

I played around with this a bit; one thing I learned is that Xcode 5.0's llvm crashes when compiling Gambit-C 4.7.0's generated C code. The beta for 5.1 has a fix. Here are a few demos that might help a few people:

Very cool! I always like seeing when something is built using a typically non-traditional language for the environment. I downloaded the game to see how well it performed, and I gotta say, it's a lot of fun. Great job!

After reading the story I just have to recall stories from the guys who flew and serviced F4s; they joked they could fly without either wing simply because it was just a rocket sled.

We had one guy nicknamed Major Cablecutter because he "clipped" the guy lines of a radio tower one time. He also had come back more than once with branches stuck to his F4. Because they were only "Recon" they tended to be aggressive during war games, and this same guy over-stressed his airframe turning into some F18s trying to tag him.

So many military planes have such high thrust-to-weight ratios that I do not doubt wings merely become the means to stable flight.

This is also a great way to solve a problem. If everyone in the room is stumped, throw out a stupid solution. If nobody can improve on it, then the last solution wins. Works surprisingly well as most people can critique while finding it hard to create from scratch.

So, uh, what exactly is 'wikimedia'? How is it different from the straight-up 'Wikipedia'? This article seems like it should have been on Wikipedia, but it's on Wikimedia. I've never seen an article of this nature being hosted on wikimedia. What's going on here?

Relevant here: "I use a trick with co-workers when we're trying to decide where to eat for lunch and no one has any ideas. I recommend McDonald's." By throwing out a "wrong answer," better suggestions are made.

I've made a conscious effort recently to make more mistakes (my apologies to the people on julia-users). I feel it's improved the rate at which I learn things. And I feel validated by "Antifragile" which I've just started reading.

(By chance I am also currently being tested for brain damage. It's bitterly amusing that I end up being unsure if I am actually making more mistakes on purpose or not...)

That's basically how I go with links posted on Hacker News or Reddit. Often I can spare myself the burden of reading a dense article by reading the comments. There is always someone who has only read the title and says something dumb; he is then quickly corrected by someone who did read the article and explains it thoroughly.

On the one hand, getting to the comments first is also good when the source is dubious. On the other hand, some articles are definitely worth reading (which is usually easy to guess from the title or first comments), and it feels good to give back when you know what the article is about and can contribute to the discussion.

Sometimes you can get a solution to a problem by saying that after spending a bunch of time attempting to solve it, you've decided that a solution is impossible. The desire to prove you wrong is too much for some to withstand, and they go out of their way to provide you with a solution. (A consequence of http://xkcd.com/386/ - "someone is wrong on the internet".)

Best is subjective. Who is it best for: the lazy individual posing the question (incorrect statement), or the people who will reply? In addition, how many more questions can the person ask like this before everyone ignores them?

The best way to get an answer on the internet, as everywhere else, is to ask a very clear question of the right interlocutor.

In other words, you need to do enough work on your own to figure out what you need to know, after which you'll find that (as long as you have a decent command of the language you're expressing yourself in and of the general terms in the field of inquiry) it's not difficult to get good answers.

Dorian Nakamoto is Satoshi: This is an attempt by him to throw people off his scent, which would be foolish and desperate, since chances are the millions of eyes focused on him will find more concrete proof than the Newsweek article, rendering the posting moot and almost guaranteeing the drama will continue, if not whipping it into a larger frenzy.

Dorian Nakamoto isn't Satoshi: An attempt to absolve someone of harassment. Noble, but not wise, since now he will have to continue to disprove serious accusations of his identity, or else innocent people will be harmed again. And the corollary, if he does not publish a refutation people will assume it's tacit agreement. People will continue looking for him.

It's late and I know I didn't think of everything, but I can't see this being a winning move by Satoshi in any scenario.

My personal theory? Dorian was a member of a crypto group that eventually gave birth to Bitcoin, but he was never part of the implementation. Maybe he thought of the original math/idea, so they named their pseudonym after him in his honor? Probably not true, but fun!

(ignore the signature from today, anyone could have done that in the brouhaha following today's disclosures)

Based on that, it looks pretty clear to me that Dorian Nakamoto decided to "latch on" to the Satoshi Nakamoto founder's myth, either as a way to boost his reputation/ego or as some practical joke (or both!).

That or Satoshi the Founder is trolling us all.

1. Originally noted a few moments ago by mpfrank on bitcointalk.

EDIT: Duh, it's fake. Obviously, it's possible to set your system clock back. I was so intrigued I wasn't thinking clearly, even after I pointed this very "attack" out during the Ed Snowden GPG affair. Sorry folks, maybe if it were timestamped in the blockchain.

If I were Satoshi (Dorian) Nakamoto and I wanted people to leave me alone, I might post as Satoshi and tell people I'm not Dorian.

If I'm Satoshi Nakamoto and I'm not Dorian, I might post as Satoshi to try to get people to leave Dorian alone. But that's a pretty feeble attempt. If he cared enough to get people to leave Dorian alone, you'd think he'd come up with something that provided a bit more proof. Otherwise, why bother breaking silence?

I'm confused by all this. I find the drama interesting, but Newsweek quoted this:

"I am no longer involved in that and I cannot discuss it," he says, dismissing all further queries with a swat of his left hand. "It's been turned over to other people. They are in charge of it now. I no longer have any connection."

Did the reporter really lie about this statement? That seems like really a massive stretch, to me.

Satoshi has created a new currency, a cryptocurrency. The currency has become more popular and valuable than Satoshi could have imagined. Satoshi now holds almost $1 billion of his new currency. Due to the currency's pseudo-anonymity, low liquidity and its value in illicit trade, it is far too dangerous for Satoshi to cash in and reveal himself...

Of course, this isn't necessarily proof that Dorian isn't Satoshi. (I personally don't think that he is, but I guess we'll see.) Regardless, it sure will be nice when the media stops hounding this guy. Especially if he really isn't Satoshi. This could have the potential of completely ruining somebody's life, particularly one who seems rather private like Dorian.

Should we change our beliefs based on this post? Based on Dorian's denials today, isn't this post just as likely in the world in which Dorian is Satoshi and the world in which Dorian is not? I don't have a strong belief one way or the other, but this doesn't seem like good evidence.

You can never hope to discover the truth behind the fiction of Satoshi. The closer you think you get, the further fiction will hide the truth. Indisputable evidence is nice, sure - but knowing is not being[0].

The man is a legend for a reason and will remain that way, regardless of anyone who claims or is proven to be, or not be him.

The "real" Satoshi Nakamoto coming forward after all this time just to say he isn't Dorian Nakamoto seems kind of suspicious to me. It's not like Dorian Nakamoto is the first person to have been accused of being Satoshi.

When shown the original bitcoin proposal that Newsweek linked to in its story, Nakamoto said he didn't write it, and said the email address in the document wasn't his.

"Peer-to-peer can be anything," he said. "That's just a matter of address. What the hell? It doesn't make sense to me."

Asked if he was technically able to come up with the idea for bitcoin, Nakamoto responded: "Capability? Yes, but any programmer could do that."

For someone who only recently heard about Bitcoin this seems like an odd response. It almost seems like if you made a factually incorrect technical statement about Bitcoin in his presence, he might correct you.

If Satoshi disappeared for as long as he did, would he really come back for THIS? It seems like a waste of time and it's awfully risky. I don't believe Dorian is Satoshi and I don't believe that the response from "Satoshi" is Satoshi. Or perhaps Satoshi isn't as intelligent as he is made out to be and this screw up will in turn screw him, just like the other internet recluses that have fallen?

One thing is for certain, if Dorian is the real Satoshi, the Feds - NSA and FBI - are all over him, and they will know for certain whether he is the Satoshi (if they didn't know a long time ago). If Dorian = Satoshi, he's going to be forced in front of the NSA, and interrogated about bitcoin, he will have no choice in the matter (it won't matter if he's no longer involved, and it won't matter if there are no weaknesses to give the Feds, they will still do it).

It would seem odd for someone who's otherwise been so careful about security to use his or her real name. It would seem less odd for them to flick through the phone book and choose someone else's name though (or pick it off a grave stone, randomly mash forename and surname together - whatever.) Sharing the name is weak evidence at best. With so many people, odds are if you pick something vaguely okay someone in the world will have it too.

I think the most compelling evidence that Dorian is Satoshi is described here: https://news.ycombinator.com/item?id=7354326 [EDIT: if an eyewitness account is compelling enough for you, this is evidence. It is to me; apparently not to some.]

Dorian (as recognized by the retailer) bought a crepe from one of the first retailers to accept bitcoins. The first transaction to the crepe retailer's bitcoin address, 1KfQKmME7bQm5AesPiizWk6h3JPUekwoBC, was for 2.2 bitcoins on July 17, 2011; you can check the retailer's Twitter feed, twitter.com/Ocrepes: "The award for being the first customer who bought crepes for @bitcoins goes to ... anonymous (the winner refused to reveal identity)"

Tracing those addresses/transactions back leads to large-volume addresses. [EDIT: 432,000 coins]

Without realizing it, I've been doing the same "CD trick". I play the Monstercat album mixes (https://www.youtube.com/user/MonstercatMedia - dubstep, which, regardless of its musical merits, I find conducive to focusing and not trailing off) and see how many I go through in the day. I also like albums because they're about an hour long, which I use as one-hour-long pomodoro timers. 20 minutes is just way too short for me to truly focus.

I also really like the attitude that if you're not touching code, you're not doing real work. Sure, project managers etc. will say that your job is not solely to write code, and that responding to emails, participating in meetings with your teammates, etc. are as much part of your job. But I like the simplicity of "if you're not writing code, you're fucking off" and how easy it makes it to answer the question "Did I work today?".

The other points are spot on. I suspect a lot of engineers are of the ADHD-type personality (the fact that "yak shaving" is a thing in our jargon is a good sign of that IMO), and a key part of addressing it is to learn how to spot when you go off track (zoning off and going on Twitter when the bug is getting a bit too hard to track down, stopping what you're doing because a random question popped into your head and you just have to read the related Wikipedia article, etc.) so that you can correct yourself. Don't feel bad about it- just learn how to catch it early, and stop doing it.

I heard a talk a while back where the speaker was driving home the fact that we need to get used to the fact that we should do things regardless of whether we feel like doing them or not. It sounds super dumb and like the ultimate first world problem, but thinking about it hard made me realized how skewed my perspective was. I feel like our culture at large really leads us to believe that we should only do things we like and enjoy; "I don't feel like doing it" is definitely a sentence I hear regularly among my peers.

Finally, surrounding yourself with smart, hard working people is the ultimate productivity hack to me. In college, the quality of my work changed drastically depending on whether I sat at the front of the class with the math nerds or at the back of the class with the anime nerds. I realized that while I can be self-driven when it comes to things that really matter to me, a lot of the time I will follow the general tendencies of whatever group I am in. I suspect this is why all the smart, talented people in the industry are friends in some capacity - because they recognize how tremendously powerful it is to be in an environment where the average is very, very high. This leads to situations where you have companies who seem to be always in the spotlight, always have the best people, etc. (i.e. Valve, Oculus, iD Software, to stay in the gaming register that the article has), and then the other 99%.

If you feel like your workplace isn't encouraging you to be the best you can be on that front, that's an extremely compelling reason to find another place.

When I hear people saying, "I can do what takes someone else a day to do..." they're not thinking that their coworker is also doing it in an hour or two, and wasting the rest of the day. So everyone thinks they're more productive than the next person.

And I'll look at that code and go, "You have no tests. You didn't think about these three things. There are two bugs waiting to happen. This code is messy and is going to be hard to change later. It took you two hours because you didn't do the other six hours of work required to get it really done."

This is why I believe pair programming ends up not being a waste. Perhaps the most disciplined programmers can go 8 hours without stopping, but most people can't.

It's much harder to slack when you're pairing. However, it's also much slower when you pair because you keep thinking of things to check, you write more test code, you write more robust code because two people are trying to attack it rather than one.

Good post; I just have a little problem with one piece of advice: the "keep at it until you finish it" part. A lot of times, I set my mind to finishing something before going home. It often ends with me scratching my head until midnight, going to bed frustrated, and waking up with an obvious solution in my head. That's where I feel Rich Hickey's hammock-driven development is a better way of thinking about productivity.

This guy is very right. I wrote myself a simple time tracker (I tried a bunch of other ones, but I was unhappy), to track my daily activities. Like John Carmack, I'd turn the timer off any second I'm not spending doing real work, even if I go to the bathroom.

I found out that during an "8-10 hour workday" I'm actually working, as in coding, getting shit done, for maybe 2, tops 3 hours...

That's not just an interesting observation, I was FUCKING HORRIFIED. I'm pissing my limited lifetime away!

The first step is awareness. Always. Then comes improvement. I kept tracking myself, now being very aware of what interrupts my work, and limiting my distractions. I have yet to claim 8-10 hours of solid work in a day, but it's getting better.
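A tracker of that shape needs only a few lines; here is a hypothetical Python sketch (not the commenter's actual tool) of a timer that accumulates only actively worked seconds:

```python
import time

class WorkTimer:
    """Accumulates only the seconds between start() and stop() calls."""
    def __init__(self):
        self.total = 0.0      # seconds of real work so far
        self._started = None  # monotonic timestamp of the running span, if any

    def start(self):
        # No-op if already running, so double-starts don't lose time.
        if self._started is None:
            self._started = time.monotonic()

    def stop(self):
        # No-op if not running; otherwise bank the elapsed span.
        if self._started is not None:
            self.total += time.monotonic() - self._started
            self._started = None

# Usage: start when you begin real work, stop the second you step away.
t = WorkTimer()
t.start()
# ... actual coding happens here ...
t.stop()
print(f"Real work today: {t.total / 3600:.1f} hours")
```

The point isn't the code but the discipline: the honesty of hitting stop() for every bathroom break is what produces the horrifying 2-3 hour number.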

Being productive is pure execution of an implementation or an idea. Anything else like planning, thinking through your problem is not deliverable or visible in the final output. Yet these things are necessary. Deciding when and how much to plan and think about your problem becomes a strategic decision. You want to spend the minimum effort required for the most effective solution within your time frame.

I'm almost certain that I have undiagnosed ADHD, but living in London, it is something that is not readily recognized. Whether this condition exists or not, my attention span is well below that of my peers at work. This led me to find out the optimal period of time I can concentrate on something without my mind wandering. I took a timer and, over a period of a week, experimented with blocks of time: starting from 25 minutes, I gradually reduced the block by 5 minutes until I found that sweet spot. I expected it'd be about 15 minutes, but to my dismay it was only 5. WTH!

I accepted the results and started each task with the expectation that it should finish in less than 5 minutes. If it doesn't, then I'm being inefficient because I'm either

1) disorganised - spending time looking for artefacts required to do the piece of work
2) unskilled - spending most of that 5 minutes not executing but thinking about how to do it.

At the end of the time box, I'd document which category my inability to complete the task fell into and note it somewhere, so that at the end of the day I could either spend time getting that area more organized, look for opportunities for automation, or do some learning so that I become a bit more skilled. Dealing with my disarray and unskilledness at the end of the day helped me work smarter. Intraday, I'd not have the time to do this; this is where the grind comes in, where you plough through a piece of work knowing that it might not be the best way of doing it, but you need to get it out the door.

The time management system is unnaturally granular, so you can quickly put a stop to unproductive avenues. This has gradually made me a more organized and skilled programmer over the last 3 months, and more importantly, more productive. It's also given me detailed metrics on my productivity: I can measure my level of distraction, unpreparedness and so on. The odd effect of all of this is that I can work much longer hours without moving from my seat (though I do force myself to get up for breaks).

All this is possible because of where technology is now; without it the process is far too granular and unwieldy to be practical.

From my experience I agree with what the author said in points 7 and 8. This is vital:

* Have an objective productivity metric - how do you measure your productivity?
* Accept that the grind is part of the job

I think everything else falls into the categories of managing procrastination, motivation and being a bit more organized, which can vary widely depending on the individual.

I believe I've seen this article posted before, possibly on HN. In any case, I re-read it today and all the lessons in it are applicable to me - yet again. Though I've made improvements in many areas, I have also fallen deeper into some of these pitfalls.

I was once obsessed with productivity, and for me it came down to an attitude of 'just do it'. This was filtered out into various micro-level attitudes and behaviours, many of which are discussed in this article... However, I now realize that going back to University sapped me of all this yet again: I was in the world where smart guys reign supreme. This is the reason why the smart guy pitfalls exist at all: there are artificial worlds where we can somehow glide by just by being smart.

The real world is real, and it takes real work to stay on top of your shit.

Some things I used to do that I will start doing much more again:

- Write everything down
- Make todo lists
- Try to actually measure productivity (http://www.rescuetime.com)
- Be realistic and conservative with estimates of how long things can take.
  -- Not really knowing how long something will take should be scary.
  -- Do a quantifiable percent of a task, time it, extrapolate.
- Stop borrowing time (and money).

I think I'm a pretty hard worker, and I definitely feel the need to be productive, but every few weeks or so I have a moment where I say to myself, "You know, it's just a friggin' computer program, who cares?" And then for a day or two I coast a little bit, and try to get away from my laptop as much as possible. I guess this means I'll never be as good as John Carmack. Oh well, it's worth it for me if I get to slow down and enjoy life every once in a while.

Definitely one of the better articles I've read on productivity that can be applied in many more areas than just programming.

I've found my own sense of entitlement and superiority inflated then promptly deflated by smarter and more productive people; I think the hardest and most important part of the experience though (that the author touched on) is to move through the feelings of depression or unworthiness when you are deflated into an appreciation for what you can become and allow those people to inspire you to something better.

I've accepted that I'm not a bad ass and I'm hyper aware now of what I want to improve upon in knowledge, craft, and self-efficacy (focusing and applying myself).

And remember, writing regular expressions can be very difficult, reasoning about your regular expressions can be even more so, defining your problem can be the most difficult of all. Think before you regex.

The tutorial I learned regular expressions from, which is longer but more detailed than this one, was http://www.regular-expressions.info/. It's free, thorough and well-organized. Its only flaw is that its section on support in various languages is out of date. It was written to sell a Windows-only regex tool, but it's very non-pushy with the advertisements.

Handy. This does gloss over some of the notable differences between implementations (not everything has non-greedy matches or identical {m,n} or {m,} syntax), but it's still by far the best tutorial introduction I've seen for regular expressions.
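To make the greedy vs. non-greedy distinction concrete, here is how it plays out in Python's re module (one implementation among many; the sample string is made up for illustration, echoing the "test1"/"test3" example discussed below):

```python
import re

text = '"test1" test2 "test3"'

# Greedy: ".*" grabs as much as possible, so the match spans from the
# first quote all the way to the last quote in the string.
greedy = re.findall(r'".*"', text)
# → ['"test1" test2 "test3"']

# Non-greedy (lazy): ".*?" stops at the first closing quote, yielding
# each quoted string as a separate match.
lazy = re.findall(r'".*?"', text)
# → ['"test1"', '"test3"']
```

Implementations that lack the `?` lazy modifier usually require a negated character class instead, e.g. `"[^"]*"`, which matches the same quoted strings without relying on laziness.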

The time I finally _really_ learned regex was the time I _really_ needed them for a project that was beyond something I could stack overflow. I think having some problem in front of oneself, and testing over a ton of input data is a great way to learn.

I find regular expressions intensely annoying. I come across situations where they are a perfect fit, but, as I've never got around to learning the syntax I spend ages looking for a perfect, or sometimes not so perfect, answer on SO that gives me the arrangement. More often than not, in order to tweak the answer to make it fit I find myself in the position of having to learn the syntax in order to do so. Catch 22.

Why is it that I find it much easier to just build a lexer/parser than I do using regex? Something about my brain, every time I see them I usually do everything in my power to not have to try to understand them.

On the other hand, I've worked with some regex ninjas who appear to just innately get it.

matches the entire string, while the article says that ? should force it to match as little as possible. (All of "test1", test2 and "test3" are highlighted in red.) It works correctly at http://regexpal.com/. What am I doing wrong?

This is without-a-doubt, the most useful and coolest thing I've seen hit the Hacker News homepage this year. Seriously, this is amazing. It works really well and is easy to use as well, you could be onto something here. It's seeing people create things like this that motivate me more than any, "Why you should switch to Google Go" article ever could.

You are amazing; "A designer who teach himself to code by making beautiful and awesome apps" is the best post of the day. But yeah, the code is ... but the UI and the concept itself are beautiful ;). Keep improving and good luck.

Amazing, but it seems that there is an obvious bug in handling scroll events.

I am using a Magic Trackpad in Safari, so there is no visible draggable scroll bar in the attribute panel. When I scroll the attribute panel, the scroll events escape the panel and cause the glyph to scale. I must concede that the scaling is smooth and responsive, which renders this behavior quite funny but delightful.

Wow. As a typography enthusiast who's miserable with bezier curves, I've never seen something this straightforward. I love the coordinates it gives you on handles and points. Being able to easily work from metrics forward is a great touch as well.

I gave up on trying to learn with Fontlab's TypeTool because nothing was nearly as clear or usable as this.

This is the perfect entrance to type design, and I see myself wasting a lot of my time with this in the coming weeks.

Is there a method to import a TTF or OTF font to start from? I've done a lot of work on a personal font I use, but I'd love to see how it plays out in Glyphr. Recreating it from scratch seems like a big ordeal.

Hello, Hacker News! We've been seeing a huge response since we released Glyphr Beta 3 earlier this week - it's been very exciting! I'm trying to answer all the questions I'm receiving. In the meantime, please play with the app and let me know if you have any feedback. Thanks!