Isn't this the point at which Cloudflare is supposed to gain a handful of PR points for putting him back online, pro bono, and then doing a write up on how effortlessly they handled the bandwidth with eBPF?

Here's a "philosophical" question with regards to the internet, and perhaps even it's future. Given that a currently anonymous attacker, and likely not a "state" player (i.e. not a governmental entity with almost unlimited resources) has managed to DDoS a single website, does this portend that unless there are significant changes to the way the internet infrastructure works, we are seeing the demise of the WWW?

Kind of like a reverse wild-wild-west evolution, where the previously carefully cultivated academic and company site presence gradually degenerates into misclick-hell? And the non-technical, non-IT-savvy masses, in a bid to escape all this, end up in a facebook-style future where media is curated and presented for consumption (or perhaps, in future, facebook-type entities end up with their own wild-wild-west hell)?

I have a strange feeling that we are seeing the decline of a city/civilisation; once you used to feel safe walking out at night, knew everybody in the neighbourhood, could leave your doors unlocked... and now, you don't dare to go down the lane to the left in case you pick up a nasty virus, and if you hear a knock on the door at night/email from DHL, you don't dare to even look through the peephole/preview the JPG!

I would like to see stats from Tier1/Tier2/IX operators for that. Krebs claims it was 665 Gbit/s: https://twitter.com/briankrebs/status/778404352285405188 An attack of that size must be visible in many places, yet not a single major ISP reported it on the mailing lists. Previous, smaller attacks were reported as 'slowing down' some regional ISPs. Perhaps ISPs have gotten better.

The first thing a lot of people are thinking (and saying) is "switch to Cloudflare". But there's another name I think needs to be said - OVH. OVH can withstand a Tbps scale attack as far as I know, and it provides this to pretty much anyone. They have a pretty good interface and some of their plans are extremely cheap. They're also great at standing up for free speech, which I really appreciate.

I tried to get to an article on Krebs' site from a Bruce Schneier blog post, and couldn't, then bumped into this post in HN.

It's a pity Akamai booted him off; on the one hand, I can understand that it would significantly impact their SLAs to other customers, but on the other hand it's a shame they don't have a lower-impact network to re-host him on, and use this as a lesson in how to better mitigate such DDoSes...

I'd love to learn more about these botnets. I wonder about things like: What's the average time that a compromised computer stays in the net? What is the typical computer (grandma's old PC running XP)? Do the ISPs ever get involved to kill bots running on their networks?

Wow, I figured that everyone who had hired vDOS would be irritated, but that is pretty impressive. Still, it says a lot about how effective he has been at rooting out this stuff; the TierN infrastructure folks haven't managed to track it down even with their resources.

Something about the platform-centric world we're in now is that this sort of attack doesn't have the blocking power it once did: you can mirror your content on Twitter, FB, G+, etc. and cross-link so people can still read your stuff. This makes the "denial" part pretty watered down; it's a wonder people even bother with these sorts of attacks anymore for non-services (i.e., for regular media material like text, photos, etc.)

Of course, maybe the goal is to deny someone ad revenue, but that seems awfully low-status for such a high-profile attack: "Yeah, we really got 'em! Denied 'em AD REVENUE for a whole week!"

Brian Krebs is a hero. Are Akamai executives cowards for dumping him? I'd like to add that law enforcement are heroes.

And it's honorable he wants to meet Fly in person, recognizing him as a human being. I haven't read it yet, but I'm assuming the reference to 12-step hints that Fly's having some post-alcohol-binge regrets.

I'm sure alcohol makes it easier to hurt other human beings, which is why violent people are often drunk. I'd be ashamed of myself if I woke up realizing that I'd spent my life actively trying to harm other human beings for money, feeling no remorse until Karma (here defined as law enforcement officials) finally caught up with me.

I'm wondering if the rising scale of these attacks and the seeming ease with which sites can be taken down will ultimately result in an "authenticated" internet - i.e. you can't even connect without identity verification.

We already see publishing through FB Instant Articles etc. moving in that direction on top of the current internet. To combat these types of firehose attacks, the only solution may be to take authentication one level deeper, to the connection level.

That of course sounds good to security agencies as that's the end of anonymity online.

There are a number of factors that come into play (did the site use custom SSL, which edge locations were they providing caching in, etc.), but had Krebs been a normal paying customer, this could easily have been over a million-dollar bill in the cheapest case (if it was sustained long enough to alter his 95th-percentile bracket). If things like custom SSL are in the mix (which Akamai charges absurdly high prices for), or lots of traffic from more expensive POPs, or a lack of pricing commensurate with high-volume traffic commitments, the bill could've been 5-10x that amount or more.

It seems kind of stupid to me that the massive and advanced CDN of Akamai would protect something as unimportant as a blog against such a major DDoS attack. If they were doing it pro bono, wouldn't the prudent action be to mitigate DDoSes up to a certain threshold and then actually assess the value of what you are protecting? A good lesson to have learned, I believe.

But no, they'll drop this client, who must have continually given them good referrals.

Unbelievable: they enjoyed years of free publicity from association with him, and this is how they repay him. It's bad enough that they couldn't handle the attack, despite all the bragging about their multi-Tbps capacity...

Brian Krebs wasn't a paying customer, right? Akamai provided the service pro bono. It's perfectly acceptable for them to suspend service if it becomes more than trivial in terms of cost or puts their paying customers at risk.

I think it's time for some serious financial incentives for ISPs to get serious about routing (or rather, not routing) garbage. Financial fines for every DoS originating from your AS, or blacklisting if you are a repeat offender.

I can't wait to use some of the new features in our production apps. Typescript is/was my bridge into javascript development, because IMHO javascript was a broken language for a long time, and I am not sure if I could have ever done as much as I have without its existence.

Non-nullable types, tagged union types, and easy declaration file acquisition are definitely the biggest wins for me with this release.
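
For anyone who hasn't tried them yet, tagged unions look roughly like this (a quick sketch; the toy Shape types are mine):

    // The `kind` literal property is the discriminant; in each case
    // the compiler narrows `s` to the matching member of the union.
    interface Square { kind: "square"; size: number; }
    interface Circle { kind: "circle"; radius: number; }
    type Shape = Square | Circle;

    function area(s: Shape): number {
        switch (s.kind) {
            case "square": return s.size * s.size;               // s: Square
            case "circle": return Math.PI * s.radius * s.radius; // s: Circle
        }
    }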

This isn't really the place for it, but I really wish that both Webstorm and Resharper used the actual typescript compiler for their tooling (like vscode) vs handrolling their own. Now I have to wait until Webstorm 2016.3 to see the full benefit of 2.0, rather than getting it for free by just updating typescript. Not to mention the obscene number of typescript edge-case inconsistencies in the warnings, errors, and refactorings.

Typescript is such a neat project. The js ecosystem is vast and diverse, and the typescript team has the unique job of figuring out how to make common dynamic patterns type-safe and convenient. Like... that's so cool. Every little pattern is like its own type-system puzzle, and there's no _avoiding_ the issue like a ground-up language can do, because their job is literally to type the JavaScript we write today.

Also, how much money is MS pumping into TS? A lot of OSS has one or two super-contributors that carry the project on their backs, but typescript has a small army of smart people with significant contributions.

Control-flow analysis? I think that was Flow's differentiating feature.

Obviously, there are still differentiators between the projects (like TypeScript including a known set of transpilers vs. Flow delegating to Babel), but I'm curious to know if they are converging on their core feature (e.g. how to do type-checking/static analysis).
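
For reference, this is the kind of narrowing I mean by control-flow analysis (a toy sketch; as far as I know both checkers handle this sort of thing now):

    // The checker tracks the type along each branch:
    function describe(x: string | string[] | null): string {
        if (x === null) return "nothing";    // x: null in this branch
        if (typeof x === "string") return x; // x: string here
        return x.join(", ");                 // x: string[] by elimination
    }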

Man, I feel like the only one here who doesn't really like static types. I like dynamic typing just fine (it's crazy, I know: it must be the lisp influence). And if I want static typing, TS feels a bit intrusive. Flow is much better in this respect.

I also don't think JS is the root of all evil, and I use Emacs rather than an IDE (although we do have really good integration with regular JS, in the form of the famous js2-mode, and flycheck's linter integration). I mean, do you really need your IDE checking your types as you type? It's not that slow, and us Emacs users have M-x compile, so we can run our code and then jump back to the problematic line when an error occurs, and I know IDEs have similar functionality.

Don't get me wrong: static typing can be good at times, and optional static typing and compile-time type analysis are useful tools, and I'm glad TS, Flow, and the like exist. But I always see a flock of comments saying that they couldn't possibly live without static types, thanking TS for taking them out of the hell of dynamism, and wishing there was something similar in Ruby/Python/whatever.

I really wish that MS would release typescript as a collection of plugins for babel, each handling only one thing at a time (e.g., the type system). Having my production build, es6 transpiler, type system, JSX compiler and so on (including a bunch of features I would rather didn't exist at all) all in one package feels like a failure of separation of concerns.

I understand that people find Babel's plugin ecosystem confusing and intimidating (it is), but I don't think a separate monolithic typescript that reimplements popular babel functionality is the answer.

> In TypeScript 2.0, null and undefined have their own types which allows developers to explicitly express when null/undefined values are acceptable. Now, when something can be either a number or null, you can describe it with the union type number | null (which reads as number or null).

Great news, but I suspect it's going to be pretty difficult to migrate large codebases to account for this properly, even with the help of `--strictNullChecks`. Sounds like days' worth of tedious work analyzing every function.
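
For anyone who hasn't flipped the flag yet, the change looks roughly like this (a quick sketch under --strictNullChecks; the names are mine):

    let maybe: number | null = null; // OK: nullability is explicit
    // let n: number = null;         // error: 'null' is not assignable to 'number'

    function double(x: number | null): number {
        // Using x directly (x * 2) is an error because x might be null;
        // the check narrows x to number on the right-hand side.
        return x === null ? 0 : x * 2;
    }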

Huge TypeScript fan here - been using it since its 0.8x days. And I'm very interested in the new --strictNullChecks compiler flag. But I'm trying to implement that on our current codebase, and I'm coming to the conclusion that it's still a bit premature. There are a lot of very common and harmless JS (and TS) patterns which this breaks, and for which it's been difficult (for me at least) to find a workaround.

Turning on --strictNullChecks flagged 600+ compiler errors in our 10Kloc codebase. I've addressed about half of those so far, and I can't say that any of them have actually been a real bug that I'm glad got caught. On the contrary, because of the weird hoops it makes you jump through (e.g., encodeURIComponent(url || '')), I'd say that our codebase feels even less clean.
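
To be concrete, these are the two workarounds I keep reaching for (a sketch; buildLink is a made-up name):

    function buildLink(url: string | null): string {
        // 1. Coalesce to a default -- safe, but it litters the code:
        return encodeURIComponent(url || "");
    }

    function buildLinkTrusting(url: string | null): string {
        // 2. TS 2.0's new postfix ! (non-null assertion), for when you
        //    *know* url is set by now; note there's no runtime check.
        return encodeURIComponent(url!);
    }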

Does Typescript have a facility to support partial function application?

Say I have a function of arity 4 and want to bind / partially apply (some might say "inject") 2 arguments to it to create a function of arity 2. Can TS infer the types of the remaining arguments, or that the result is a function at all?
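
For the record, the stock bind() typings seem to lose this information (the result comes back as any), but as far as I can tell a hand-rolled helper does get full inference; a sketch with made-up names:

    // Binds the first two arguments of a 4-arity function.
    function partial2<A, B, C, D, R>(
        f: (a: A, b: B, c: C, d: D) => R, a: A, b: B
    ): (c: C, d: D) => R {
        return (c, d) => f(a, b, c, d);
    }

    const join4 = (a: number, b: number, c: string, d: string) =>
        a + b + c.length + d.length;
    const join2 = partial2(join4, 1, 2); // inferred: (c: string, d: string) => number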

I use partial function application MUCH more than classes in the JS code that I write. There just seems to be less need for all that "taxonomy"-related refactoring.

"Stop Writing Classes", "Executioner" in "The Kingdom of Nouns" (not!), and all that sort of thing :-)

I've got to find the time to play with typescript and get the setup figured out. I've had several false starts, always running into duplicate type definition issues and similar problems - looking forward to seeing if the new @types from npm helps things.

typescript is a nicely conservative set of extensions to javascript, but if you're willing to venture a bit further afield, redhat's ceylon [https://ceylon-lang.org/] is well worth a look. it has an excellent type system, and seems to be under active development, particularly in terms of improving the runtime and code generation.

i played with it for a bit before deciding that for my personal stuff i prefer more strongly ML-based languages, but if i ever have to develop and maintain a large team project i'll definitely give ceylon a very serious look.

I gave TypeScript a try but honestly thought it was more trouble than it's worth in most javascript applications/libraries. Maybe that's just the lazy person inside me, or maybe my projects aren't big enough to make use of its features.

TypeScript is so strange for me. Each time I read something about it, I want to give it another try... then I read some code and I just can't go further. I like JavaScript as it is, without types on every line of code.

Unlike most people on HN, I like JavaScript. I've been building web apps since 2011. I liked working with jQuery, then Backbone and Grunt, then Angular and Gulp. Now I'm working with React, Webpack and Babel (ES6/ES7), and writing web apps has never been so pleasant. JavaScript in 2016 is really fine for me. And the common thread in my JS experience from 2011 to 2016 is that dynamic typing has never been a problem. I also worked with strongly typed languages like Java and C# for years, and I still prefer the flexibility of JavaScript.

So it's strange, because I admire TypeScript. The work accomplished by its team is really amazing. And it's so nice to see a single library reconcile developers with JavaScript. But on the other hand, I prefer keeping my JS without types, because it just works fine for me and the teams with whom I've worked.

Meh, yet another grep tool.... wait, by burntsushi! Whenever I hear of someone wanting to improve grep I think of the classic ridiculous fish piece[0]. But when I saw that this one was by the author of rust's regex tools, which I know from a previous post on here, are quite sophisticated, I perked up.

Also, the tool aside, this blog post should be held up as the gold standard of what gets posted to hacker news: detailed, technical, interesting.

I'm the author of ag. That was a really good comparison of the different code searching tools. The author did a great job of showing how each tool misbehaved or performed poorly in certain circumstances. He's also totally right about defaults mattering.

It looks like ripgrep gets most of its speedup on ag by:

1. Only supporting DFA-able Rust regexes. I'd love to use a lighter-weight regex library in ag, but users are accustomed to full PCRE support. Switching would cause me to receive a lot of angry emails. Maybe I'll do it anyway. PCRE has some annoying limitations. (For example, it can only search up to 2GB at a time.)

2. Not counting line numbers by default. The blog post addresses this, but I think results without line numbers are far less useful; so much so that I've traded away performance in ag. (Note that even if you tell ag not to print line numbers, it still wastes time counting them. The printing code is the result of me merging a lot of PRs that I really shouldn't have.)

3. Not using mmap(). This is a big one, and I'm not sure what the deal is here. I just added a --nommap option to ag in master.[1] It's a naive implementation, but it benchmarks comparably to the default mmap() behavior. I'm really hoping there's a flag I can pass to mmap() or madvise() that says, "Don't worry about all that synchronization stuff. I just want to read these bytes sequentially. I'm OK with undefined behavior if something else changes the file while I'm reading it."

The author also points out correctness issues with ag. Ag doesn't fully support .gitignore. It doesn't support unicode. Inverse matching (-v) can be crazy slow. These shortcomings are mostly because I originally wrote ag for myself. If I didn't use certain gitignore rules or non-ASCII encodings, I didn't write the code to support them.

Some expectation management: If you try out ripgrep, don't get your hopes up. Unless you're searching some really big codebases, you won't notice the speed difference. What you will notice, however, are the feature differences. Take a look at https://github.com/BurntSushi/ripgrep/issues to get a taste of what's missing or broken. It will be some time before all those little details are ironed-out.

In contrast, GNU grep uses libc's memchr, which is standard C code with no explicit use of SIMD instructions. However, that C code will be autovectorized to use xmm registers and SIMD instructions; xmm registers are half the size of ymm registers.

I wish more people actually took steps to optimize disk I/O, though; my current source tree may be in cache, but my logs certainly aren't. Nor are my /usr/share/docs/, /usr/includes/, or my old projects.

Chris Mason of btrfs fame did some proof of concept work for walking and reading trees in on-disk order, showing some pretty spectacular potential gains: https://oss.oracle.com/~mason/acp/

Nice! Lightgrep[1] uses libicu et al to look up code points for a user-specified encoding and encode them as bytes, then just searches for the bytes. Since ripgrep is presumably looking just for bytes, too, and compiling UTF-8 multibyte code points to a sequence of bytes, perhaps you can do likewise with ICU and support other encodings. ICU is a bear to build against when cross-compiling, but it knows hundreds of encodings, all of the proper code point names, character classes, named properties, etc., and the surface area of its API that's required for such usage is still pretty small.

It would be interesting to benchmark how much mmap hurts when operating in a non-parallel mode.

I think a lot of the residual love for mmap is because it actually did give decent results back when single core machines were the norm. However, once your program becomes multithreaded it imposes a lot of hidden synchronization costs, especially on munmap().

The fastest option might well be to use mmap sometimes but have a collection of single-thread processes instead of a single multi-threaded one so that their VM maps aren't shared. However, this significantly complicates the work-sharing and output-merging stages. If you want to keep all the benefits you'd need a shared-memory area and do manual allocation inside it for all common data which would be a lot of work.

It might also be that mmap is a loss these days even for single-threaded... I don't know.

Side note: when I last looked at this problem (on Solaris, 20ish years ago) one trick I used when mmap'ing was to skip the "madvise(MADV_SEQUENTIAL)" if the file size was below some threshold. If the file was small enough to be completely be prefetched from disk it had no effect and was just a wasted syscall. On larger files it seemed to help, though.

When I use grep (which is fairly regularly), the bottleneck is nearly always the disk or the network (in case of NFS/SMB volumes).

Just out of curiosity, what kind of use case makes grep and prospective replacements scream? The most "hardcore" I got with grep was digging through a few gigabytes of ShamePoint logs looking for those correlation IDs, and that apparently was completely I/O-bound, the CPUs on that machine stayed nearly idle.

I am not sure how excited I am... I readily accept this to be faster than ag -- but ag already scans 5M lines a second for a string literal on my machine. Not having to switch tools when I need a recursive regexp is win enough to tolerate a potential 0.4s vs 0.32s in everyday searches.

Great tool. Does there exist a faster implementation of sort as well? I once implemented quicksort in C and it was faster than Unix sort by a lot, I mean, seconds instead of minutes for 1 million lines of text.

Anyone have any suggestions regarding how to best use Ripgrep within Vim? Specifically, how best to use it to recursively search the current directory (or specified directory) and have the results appear in a quickfix window that allows for easily opening the file(s) that contain the searched term.

I'm never sure whether or not I should adopt these fancy new command line tools that come out. I get them on my muscle memory and then all of a sudden I ssh into a machine that doesn't have any of these and I'm screwed...

I knew it. The name is absolutely ironic. I cannot just drop it in and make all my scripts, and whatever scripts I download, immediately work faster (nor is it compatible with my shell typing reflexes). New, shiny, fast tool, doomed from birth.

Popular Mechanics found different results when they did a similar study [0].

>"One disheartening result was that our package received more abuse when marked "Fragile" or "This Side Up." The carriers flipped the package more, and it registered above-average acceleration spikes during trips for which we requested careful treatment."

Senders should add a small $1 "black box" that records acceleration data, and shipping companies should be able to query, for a given package and timestamp, which employee was accountable at that moment.

Then, when you receive a broken package, the black box tells you the timestamp when it was thrown to the ground; you tell that to the shipping company, which then finds the employee at fault and gives him/her a warning or sacks him/her.

In Korea, the magic phrase is 'contains kimchi' and you are guaranteed safe delivery. All hell breaks loose when kimchi leaks: boxes get wet and smelly, and kimchi stains don't come off easily, so delivery people take extra measures to prevent it.

The details of shipping are quite interesting. Martin Guitars (a well-known brand) removes absolutely every reference to their brand, or to the fact that they are guitars or musical instruments, from the external packaging, while keeping an internal box with their logo, etc... a box within a box.

They started doing so after having issues with "disappearing" guitars in transit (though with all the new tracking systems, that is probably more complicated nowadays).

Their packaging is also quite protective, as you can imagine with a musical instrument...

When I ordered my bike from the UK (Evans Cycles is awesome), it shipped via DHL. They're pretty high on the meh scale. The box had double-corrugated cardboard and the bike was packed for war. I'm sure it wasn't handled gently; that seems to be the expectation with shipping. This hack is super cool! Maybe one day they'll try a picture of a glass chandelier too.

This all said, 90% of the boxes I get from Amazon via UPS are in perfect condition - it's remarkable how well they handle small packages.

There's a National Geographic show called "Ultimate Factories" that has an episode called "UPS Worldport". Super fascinating. I recommend it!

True, but they could reduce damage even more by putting a picture of a stained glass window and giant letters "HIGHLY FRAGILE DELICATE STAINED GLASS WINDOW! HANDLE WITH EXTREME CARE!!" on it. That would certainly reduce damages further.

The problem is that it isn't one (a TV). Why would someone feel mortified about accidentally dropping a packaged bicycle from 2-3 feet (typical carrying height) when a fully assembled bike can be dropped from 2-3 feet, and this one is packaged, so it should be even safer? On the other hand, no one would feel free to drop a packaged LCD TV from even half a foot, because people know it includes a giant pane of what is essentially glass, and they know that there are limits to what packaging can do.

So, yeah, by failing to meet expectations when it comes to packaging a bicycle, they can reduce damages by writing on it that it's a TV instead. All right.

But isn't this still them not meeting expectations exactly? If they write on it that it's a delicate stained-glass window, that would still be not meeting expectations. If the handler is the one with unreasonable expectations or behavior (if 2-3 feet isn't a reasonable drop height and should be considered a failure), then maybe educate the handler with some writing or warnings on the packaging.

isn't the real issue here that the handler's expectations of bike packaging don't match the bike packaging's actual characteristics? so, you could tackle it head-on by writing care instructions.

alternatively, the article says only a 70-80% reduction in damages was achieved. Maybe by lying and saying it is delicate stained glass, handle with extreme care, they could up that to a 95% reduction. I guess I've just saved them 15% of their former damages (an even higher percentage of their remaining damages) with this one neat trick.

Most things arrive fully assembled. With that TV you just plug it in and that is it. You don't have to adjust the HDMI sockets with a screwdriver or double check the earth lead is correctly bolted on. You don't have to get a spanner out to adjust that five degree tilt to one side in the base.

But with a bicycle, it is an entirely different story. The seat is not centered on the rails, nice and level. Much has to be assembled, and that is understandable; however, the brakes and the gears rarely work as well as Shimano intended. The bike is part-assembled and the consumer is left to do the rest. Rarely is the finished result as polished as the fit and finish that the TV arrives with.

If a bicycle manufacturer just got that final assembly together so that only seat-height adjustment was needed, with nothing else needing a double check, then they might be able to sell to the end customer properly. As it is, there is no quality in the final delivery; bikes sent to the customer will be far from expertly 'tuned'.

Why do the shippers care about breaking a TV? Presumably there are repercussions, such as an insurance plan. So why don't those repercussions just apply to bicycles? If they're fined for enough bikes being broken, they should probably learn that they need be more careful than they thought, right?

You'd think this would affect the stock price, but currently YHOO is only trading down 8 cents (-0.18%). I honestly see this all the time: what sounds like really horrible news for a company does not affect the price. However, some random analyst or reporter who works at the Mercury Star Sun Inquirer writes a negative article or a downgrade and the stock tanks. Doesn't make much sense.

"state sponsored actor". I wonder how they decided that. did the hackers plant a flag inside yahoo's data center? or is any attack originating from outside US now considered state sponsored? of course, we will never see any proof of this.

also, did it take them 2 years to discover this breach? that's bad. or, do they just announce it now? that's worse.

"The data stolen may have included names, email addresses, telephone numbers, dates of birth and hashed passwords but may not have included unprotected passwords, payment card data or bank account information, the company said."

What's the difference between "may have" and "may not have" in this context?

Moving email addresses from one provider to another is more difficult than moving phone numbers (in the latter case, number portability can help, if available).

What exactly can an average/common end user do about such incidents, even if only to avoid them in the future? I use different passwords across accounts, all of them somewhat complex or very complex.

I have looked at a few different paid service providers before, but they're all very expensive. Expensive for me is anything that charges more than $20 per year, or worse, charges that amount or more for every single email address/alias on a domain. For personal purposes I write only about a handful of emails in an entire year, but on the receiving side I get a lot of emails - most of them somewhat commercial in nature (like online orders, bank statement notifications, marketing newsletters I've explicitly signed up for, etc.). I also have several email addresses, each one used for a different purpose and with some overlap across them.

It seems like web hosting has become extremely cheap over time whereas email hosting has stagnated on the price front for a long time.

One of the more convoluted announcements I've seen. I have to be aware that yahoo officially communicates via tumblr.com, and check two different announcement pages which may not yet be up (after converting time zones). When I clicked one of them, I had to find the notice "in my region", which had only one option (not my region) and linked to another (non-yahoo?) site with an image of a document. I can't imagine all 500M users will jump through these hoops and remember when they last changed their password.

FWIW... I just logged in to my Yahoo Account and removed the security questions. Just to be sure. I had already changed my password a few months ago when first rumors of this came up. I'm pretty sure that the option to remove the security questions wasn't there back then.

Interestingly, my account is not part of the compromise but my friends' are; I can confirm this because they received a message about the compromise and I did not... I asked them how long they've had their accounts and they said about a year, whereas I have had my account for about 5 years. Interesting, no?

I think my only concern is what data I had attached to my Yahoo account (for Flickr) which I think they required me to tie to a phone number. So I guess that means I can expect people trying to abuse that phone number as a point of identification in identity theft attempts. Oh joy.

There's one thing I don't understand about this state-sponsored actor. Say you are an oppressive regime and you target activists who use Yahoo Mail to publish your dirty laundry. Why on earth would you hack half a billion accounts just to get access to a few dozen of them? It doesn't make sense. You attract too much attention; a thing like that would never go unnoticed. If, on the other hand, you've found some exploit and target specific accounts that number in the tens, maybe hundreds, you can easily get away with it.

BTW, I don't know if it's coincidental, but just yesterday I received a notification from Yahoo to disable access to Mail from third-party apps.

Oddly, I changed the password for 2 Yahoo accounts only a month ago. I have to wonder if Yahoo filtered for people who recently changed passwords before designating me as a person who might be affected.

I have an account that was definitely compromised. I had completely forgotten this account existed and never used it to sign up for anything else. I was rather surprised when I realized someone had that email and password.

It seems bizarre that Yahoo would use a post on tumblr.com to make such an important announcement. From what I've seen Tumblr has become mostly a wasteland of worthless garbage in the past few years and no one takes it seriously any more. Isn't this the sort of thing that ought to be on the yahoo.com home page from a PR crisis management standpoint?

Medicine is probably the second-best place, after the military, where we can observe how greed and corruption are literally killing people.

I'm living in Russia and have recently been involved in the medical devices market here. The local market for cardiology stents (little springs they insert non-surgically into your heart to remove artery clogging and prevent heart attack or stroke) has long been occupied by three US companies. The Russian company I invested in made their own stent design and launched a production factory in Western Siberia. Our prices are three to four times lower than prices for the same class of stents from the US competitors, and the quality is the same or higher. We fought out 15% of the market over the last two years.

I have to say that almost 99% of all stents in Russia are installed at the cost of the state medical insurance - every person in Russia is covered by this insurance, and that insurance is simply sponsored by the state or local budget. The budget allocated to this kind of medical support is fixed, so if the yearly budget is 100M rubles (our local currency) and the cost of a manipulation and a stent is 100K rubles, then you can install stents in 1000 patients in one year. If the price goes down four-fold, then it will be 4000 patients. And this stent manipulation is a life saver in the true sense of the word. So, basically, with our stents we can save four times more people's lives, which on the scale of Russia would be tens of thousands of people.

Here enter the greed and corruption. One of the US companies approached one of the most powerful Russian oligarchs with good ties in the government. He lobbied for a government decree stating that this US company will be the single supplier for cardiology stents starting Jan 2017. So, all hospitals and clinics are obliged to buy stents only from them, at the price they set. Tens of thousands of Russian people will die each year because of this greed and corruption - and we can't do much about it.

The hacker groups are doing what they can to expose Mylan's (the EpiPen maker's) greed, which is laudable. What is really needed is also an explainer on why a bit of govt. leverage (socialism if you will) is good in medicine pricing as well.

Mylan is really great at buying legislation. They leveraged their 90% ownership of the epinephrine auto-injector market such that:

[1] It lobbied hard to ensure that all parents of school-going kids (or taxpayers) paid for EpiPens, by making it into a bill that politicians could easily justify.

Once the bill passed and schools all over the country purchased these by the boatload, they just kept raising the price over and over, milking the profits.

When it got too much and they could not ignore the patient backlash, they turned again to purchasing legislation...

[2] Now they want to make it so that the patients do not see the copays - instead everyone suffers by paying more for health insurance.

I feel like health is a degenerate case of free markets. In any free market, the price is set by consumers assessing their utility for the goods or services purchased. In the case of pencils, productivity software, energy, raw materials, etc., consumers compare the methods of resolving the need, or at baseline the cost of not addressing the need.

In healthcare, there are lots of situations where the cost is X dollars vs literal death. Of course, death is not an acceptable alternative, so an acceptable X ends up being very, very high for the treatment. Most people would pay their life savings to treat any life-threatening ailment.

I honestly believe that free markets setting prices is good for most industries, but I cannot see it working in situations where the benefit categorically supersedes any amount of money.

It seems like we need to either rethink IP law surrounding healthcare, or have a monopsony (single payer or something else) setting prices.

This is a hard thing for me to resolve, as somebody who normally likes a libertarian approach.

The title could be rephrased as "Cheap guys risk the lives of thousands of people by promising savings of a few bucks".

The problem is not with the "greedy corporations", but with the poorly designed legislation regarding intellectual property rights.

The state created the protectionist environment in which companies can become bullies and be sure that they won't be exposed to any economic competition.

Of course, a complete lack of IP laws would deter companies from investing in research, but overly strong IP laws have the same effect. Why would a company risk their money and do research once they have found a cash cow that can be milked for a long time, with the state guaranteeing it?

TIL that an "autoject" is an inexpensive self-injection tool commonly used by diabetics[1] that can be carried safely and used easily. It can be loaded with Insulin, or with any drug whatever. The OP article describes using it to inject epinephrine, stating that

> A 1mL vial of epinephrine costs about $2.50... Doses range from 0.01mL for babies, to 0.1mL for children, to 0.3mL for adults.

In other words, if your doctor will give you a prescription for the drug itself, you could assemble three epipen equivalents for less than $100.

It's awesome to see a 'hacker' building a $30 EpiPen. But looking only at the materials cost for a medical device ignores the millions (sometimes billions?) of dollars spent on R&D, IP licensing, and (perhaps most significantly) regulatory compliance.

The pricing system for devices and drugs is definitely screwed up in the US, but Mylan's 36% gross margin on the devices doesn't seem criminal.

Perhaps they're padding their cost numbers. And perhaps there are IP shenanigans at play that I'm not aware of. But one needs a thorough understanding of the total costs to invent, develop, achieve regulatory clearance for, and market a medical device in order to assess the morality of the pricing.

This doesn't feel like the right platform for DIY. When someone needs an EpiPen, it's because they might be dying. Presumably, a large and well-capitalized organization will have tested their device extensively and can offer better guarantees about it actually working (I should stress presumably). There are a lot of ways in which the hacker mindset can be beneficial to society, but this particular application feels like an ethical gray area.

So "Four Thieves Vinegar" says their DIY auto-injector works probably almost as well as the EpiPen. Sign me up! </s>

Are we really complaining about an "onerous regulatory process" for a device which untrained laymen need to be able to use in a high stress emergency situation?

I'd like to see Four Thieves Vinegar fund the necessary trials to prove their device is safe, gain FDA approval, bring the device to market, and defend themselves against the inevitable lawsuits, and then tell us how they can sell the device with less than 80% gross margins. The marginal cost of making one more pill or one more device is almost entirely irrelevant, and any article that tries to make a case for a medical product being overpriced based on COGS isn't worth reading IMO.

The price for EpiPens went up because no one else was able to make a competing product that didn't malfunction or deliver the wrong dose of epinephrine.

> Four Thieves Vinegar have created and uploaded the plans for the simple version, called the Epipencil. Also spring loaded, the parts are gathered over the counter. The epinephrine will still need to be acquired with a prescription.

This still involves an FDA-approved drug obtained through normal channels; the DIY part is the injector.

Creating DIY medical drugs would certainly be something to be concerned about, but I don't see the problem with DIY medical tools.

The hack here is simple: this group did not get FDA approval for their device. Greedy corporations have repeatedly tried to make money by competing with Mylan with cheaper Epipens, but they've been prohibited from doing so by the US government (not so in Europe, where the unfortunate Europeans suffer eight greedy corporations trying to drive prices down).

It may be better than having no epinephrine at hand. But other than that, there are a lot of problems: How sure are you that it will work when you need it? Can you fill the syringe cleanly enough? Will the epinephrine in the syringe degrade, or worse, develop a bacterial growth?

It may be a better idea to look into keeping a syringe+vial combination on hand, prescribed by a doctor. Less convenient, and you need to learn how to use it (and preferably teach those close to you), but this may be a whole lot safer. The downside, of course, is the problem of self-administration when in anaphylactic shock.

I've had to use an EpiPen twice in my life. Oh my gosh, the terror in your heart when you're self administering it is real. I will never forget the experience for the rest of my life. I don't want to trust some hack with no FDA approval in that moment.

I don't give a damn if the product is $50 or $500. I will buy it; it's saved my life many times. It's not "awesome" to see a hacker point out that the materials are cheap.

This points out (among other things) that the design is patent protected and FDA rules make it difficult to come up with other designs that don't violate the patent. It is also mentioned that the devices need to go through long and expensive regulatory process.

Now granted Shkreli is a controversial figure, but basically drug companies are businesses, and if you sort of detach yourself and look from a business perspective and value-based pricing, Epipen competes with the ER, and $600 is a bargain vs an ER visit.

And of course his ultimate conclusion is that maybe life saving drugs are more like water and power than cell phones and wine? Maybe the government should get involved in making generic drugs available.

Somewhat tangential: I surmise that this title will be subject to editing by HN staff, but I think that "Hacker group creates $30 DIY Epipen to expose corporate greed and save lives" is an exemplary post title for HN and want to see more like it.

Just saw more Epipen Congressional testimony. The actual unit cost of the Epipen (whether branded or "generic") is around $67 USD. Assuming that this cost were not overly inflated beyond actual overhead and unit costs, in order to be sustainable, a reasonable retail price without distributors would be $134 USD... with distributors $200-238.

That said, the more downward pressure from competitors (commercial or nonprofit projects), the better for customers; especially where a monopoly existed, it's rational for customers to band together and attack excessive hegemony.

Enterprising folks need to jump on this and sell it as a kit (w/ or w/o the medication).

"3.1.24 The health economics model assumes that people who receive adrenaline auto-injectors will be allocated two epinephrine pens (EpiPens) with an average shelf-life of 6 months. Each auto-inject EpiPen costs the NHS 28.77 (British national formulary 60). This equates to 115.08 per person per year."

$30 is way too much, production cost of EpiPen is probably in single-digit dollars, maybe even less. That's not the point, nobody thinks EpiPen costs $300 to produce.

The system is built in a very specific and deliberate way in the US - there are patented drugs that are expensive, by design, and pharma is supposed to finance R&D and FDA testing and so on from that money, instead of financing it through taxation, or venture investing, or other means. Now, one can claim that maybe Mylan is abusing the system and the money that was supposed to finance R&D is instead financing lavish salaries or whatever. And one can claim the system should not be built like this at all, but some other way. Maybe.

But completely ignoring the whole design and saying "ah, we've discovered it costs $30!" is useless. Yes, it actually costs even less to manufacture, way less. It's obvious. The reason Mylan charges more is not because it costs a lot to manufacture; the reason is that that's how the patented drug market in the US works. If one wants to change it, one needs to understand how it works. It's not corruption, it's the design of the system.

There's an abysmal difference between hacking something together and manufacturing a reliable product at scale that people can bet their lives on. Everything, from R&D to the cost of lawsuits, FDA trials and regulatory frameworks, makes these comparisons dumb and ridiculous.

I've been manufacturing products for thirty years. It's never simple for good products, not even a cup of coffee at Starbucks.

I am a hacker at heart, and I believe there are definitely some shady dealings with government and industry lobbyists, however, I like to look at things on both sides, since there is always another side.

Truth is, if it was more than one hacker in this collective making the 'Epipencil', they would have had to design it, procure materials, and fabricate it all in less than an hour for the $30 figure to hold while meeting minimum wage requirements.

This does NOT speak to QA/QC, testing, insurance, FDA approval, legal costs or even their hacker lab overhead in equipment and energy to make one, let alone hundreds of thousands of these potentially life-saving products.

My guess is that $150 per EpiPen is close to what you need to fulfill all of the above requirements and then some. Far from the $300 or more in pen price hikes, so it was good they did this as an exercise in putting Mylan and the government in the spotlight. Bravo, really!

My belief is that it is not solely big bad corporations, but big bad government AND big bad corporations. Just look at the moral integrity of our two current POTUS candidates.

I am trying to become more financially literate in my old age, and I am trying to teach my children likewise, since financial illiteracy is a deterrent to poor people improving their lives, or hackers making a worthwhile dollar in conjunction with learning and exploration.

I tell my kids to think twice when they reactively say or answer:

"ASAP" - when is that? Point to a date on the calendar;

"It will take 5 min." - It never takes just a minute or five;

"It only cost $8 for the materials." - How much is your time worth? Learning is a benefit that cannot be quantified too easily, but for other matters, you need to value your time.

The expected market response should have been a flood of alternatives at 1/100 or even 1/10 the price, since the base ingredient costs pennies. But these 'ideal' market scenarios that are in the public interest rarely come to pass.

What we often get instead are completely self-serving and crafty efforts, in collusion with 'NGOs' and lobbyists, to leech taxpayer subsidies and 'force' the product onto institutions via legislation.

This pattern is repeated so often and so widely that it's predictable. Also predictable is framing it as a capitalism-vs-socialism issue to trigger and distract while the corruption continues unabated.

The problem is that healthcare is critical. If your checks and balances and idealised system do not work, you risk letting people feed on others' desperation and create demons. And these sociopaths then multiply within your society, killing it from within. This is the biggest argument for socialized healthcare.

There are so many reasons the EpiPen costs $318, corporate greed being one of them. One of the huge reasons that no one talks about is that they rarely actually sell for $318. It's priced at that, but insurance companies negotiate a lower (unpublished) price in most/all insured purchases. It's only those with no insurance, or who are buying it without using their insurance, who pay the full price.

This is true for nearly all drugs, medical equipment, and medical procedures in the US. This is one of the huge problems with our system: everyone puts a huge price tag on their stuff knowing that insurance will negotiate down.

It needs to be proved to work, which is rightly arduous. Unlike in (most) software, you can't just fix it later. Defects kill. There needs to be a high bar of evidence to prove that:

A) the drug works

B) It doesn't cause your face to melt off

C) it's reliable.

All of this is costly. Now, you have two choices: nationalise your drug R&D and charge a uniform cost spread over all drug classes or through general taxation, or sweep away all your regulations on drug prices and start again. (Like, why the fuck is medicaid not allowed to collectively bargain on price? That's taxpayer subsidy right there...)

In the UK there is a thing called NICE, which is semi-autonomous and run by people who understand stats (i.e. not politicians). Its job is to evaluate the cost, and crucially the effectiveness, of all drugs prescribed within the NHS.

Is the drug actually effective? (Sure, it's 50% more powerful, but it costs 190% as much; just double up the old one, etc. etc.)

Does it provide value for money?

Is it safe?

Those are all the questions they ask. If a drug fails the tests, it's either written out of the guidelines or, more unusually, banned.

Pretty neat. I wondered why you couldn't just use an autoinjector like diabetics use (answer: you need a larger-diameter needle). Still easily doable, and it's all off-the-shelf parts made by medical device manufacturers and drug makers, so not so much "DIY" as "repurposing existing medical gear to be more versatile".

How much does it cost to get and maintain FDA approval for marketing the EpiPen? What are the financial costs of the legal risks you are taking by selling it to patients? In other words, if it's so lucrative, why isn't anyone else doing it?

That's like saying pirated software exposes the greed of software companies. I don't think anybody believes that EpiPens themselves are very expensive to make; just like software, the cost comes from development, which in this market consists mostly of regulatory compliance and approval. If it were easy to bring an epinephrine injector to market, Teva would have already done so and Sanofi wouldn't have had to recall theirs. If there were more auto-injectors on the market, prices would go down.

The outrageous price of EpiPens is not a result of corporate greed so much as a failure of the FDA and Congress - but mostly Congress; the FDA is their subordinate. They failed to promulgate rules that maintained a competitive market for epinephrine auto-injectors.

Watch the video. All it describes is loading epinephrine into an autoinjector. This is great because it suggests the barrier to competition is relatively low-hanging fruit for those already in the drug delivery markets.

If the product is so expensive, and someone can viably make a competing product for less, I find it hard to believe that it hasn't been done. A fairer comparison would be "medical aid which wasn't subjected to the same regulations and testing is cheaper to make and distribute", aka corporate greed.

If someone knows how to make a product for $30 that the competition charges $300 for, why not go into business and undercut their price by a huge margin? Millions of users' lives would be instantly improved with dramatically cheaper epipens. That will do far more to combat greed than a blog post.

However, I think that if someone were to try this, they'd find there are many more costs involved than the raw ingredients, and it might not be quite so simple to massively undercut the competition. But still, they should go for it! Competition is the best medicine for overpriced goods.

One thing I hate is that essentially all vector graphics and text rendering (Cairo, Quartz, MS Windows, Adobe apps, ...) is done with gamma-oblivious antialiasing, which means that apparent stroke width / text color changes as you scale text up or down.

This is why if you render vector graphics to a raster image at high resolution and then scale the image down (using high quality resampling), you get something that looks substantially thinner/lighter than a vector render.

This causes all kinds of problems with accurately rendering very detailed vector images full of fine lines and detailed patterns (e.g. zoomed-out maps). It also breaks WYSIWYG between high-resolution printing and screen renders. (It doesn't help that the antialiasing in common vector graphics / text renderers is also fairly inaccurate in general for detailed shapes, leading to weird seams etc.)

But nobody can afford to fix their gamma handling code for on-screen rendering, because all the screen fonts we use were designed with the assumption of wrong gamma treatment, which means most text will look too thin after the change.
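
To make it concrete, the fix amounts to resampling in linear light rather than in the encoded values. A sketch (constants from the sRGB spec; the function names are mine):

    const srgbToLinear = (v: number) =>          // v in [0, 1]
        v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
    const linearToSrgb = (v: number) =>
        v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1 / 2.4) - 0.055;

    // Averaging one channel of two neighboring pixels, e.g. in a 2:1
    // downscale. Naive averaging of the encoded values, (a + b) / 2,
    // biases dark (strokes come out bolder); averaging the actual light
    // and re-encoding avoids the mismatch described above.
    const downscalePair = (a: number, b: number) =>
        linearToSrgb((srgbToLinear(a) + srgbToLinear(b)) / 2);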

* * *

To see a prototype of a better vector graphics implementation than anything in current production, and some nice demo images of how broken current implementations are when they hit complicated graphics, check this 2014 paper: http://w3.impa.br/~diego/projects/GanEtAl14/

Um. Curiously, that first example didn't work for me. Figures 1 & 2, under "Light emission vs perceptual brightness", are compared thus: "On which image does the gradation appear more even? It's the second one!"

Except that for me it isn't. The first one, graded by emission rather than perception, appears more evenly graded to me. There is no setting I can find using the Apple calibration tool (even in expert mode) that does anything but strengthen this perception.

This raises only questions. Is this discrepancy caused by my Apple Thunderbolt Display? By my mild myopia? The natural lighting? My high-protein diet? The jazz on the stereo? The NSA? Or do I really have a different perception of light intensity?

And is anyone else getting the same?

Note: I have always had trouble with gamma correction during game setup; there has never been a setting I liked. Typically there'll be a request to adjust gamma until a character disappears, but however I fiddle things it never does.

Something that is important to note is that in Photoshop the default is gamma-incorrect blending.

If you work on game textures, and especially on effects like particles, it's important that you change the Photoshop option to use gamma-correct alpha blending. If you don't, you will get inconsistent results between your game engine and what you author in Photoshop.

This isn't as important for normal image editing because the resulting image is just being viewed directly and you just edit until it looks right.
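
For anyone wondering what that option actually changes, the difference is roughly this (a sketch assuming an sRGB working space; the names are mine):

    const toLinear = (v: number) =>
        v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
    const toSrgb = (v: number) =>
        v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1 / 2.4) - 0.055;

    // Default (gamma-incorrect): lerp the encoded values directly.
    const blendNaive = (src: number, dst: number, a: number) =>
        src * a + dst * (1 - a);

    // Gamma-correct: decode, lerp the actual light, re-encode.
    const blendLinear = (src: number, dst: number, a: number) =>
        toSrgb(toLinear(src) * a + toLinear(dst) * (1 - a));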

Enough has been said about incorrect gamma (this and [0]); now I think it's high time to bash the software of the world for incorrect downscaling (e.g. [1]). It has much more visible effects, and has real consequences for computer vision algorithms.

In the computer vision course at my university (which I help teach), we teach this stuff to make students understand the physics, but at the end of the lecture I'd always note that for vision it's largely irrelevant and isn't worth the cycles to convert images to linear scale.

I tried viewing the article on 4 different monitors. All monitors had default settings except for brightness. Monitors A & B were on new laptops, monitor C was on a very old laptop, and monitor D was on a smartphone. Here are the results:

FIGURES 1 & 2. On monitor A, all bands of color in figure 1 were easily discernible. The first four bands of color in figure 2 looked identical. Figure 1 looked more evenly spaced than figure 2. On monitor B, all bands of color in figure 1 were easily discernible. The first five bands of color in figure 2 looked identical. Figure 1 looked more evenly spaced than figure 2. On monitor C, all bands of color except the last two in figure 1 were easily discernible. The first three bands of color in figure 2 looked identical. Figure 1 looked about as evenly spaced as figure 2. The result from monitor D was the same as the result from monitor A.

FIGURE 12. On monitors A and B, the color of (A) was closer to (B) than to (C). On monitor C, (A) appeared equally close in color to (B) and (C). On monitor D, the color of (A) was exactly identical to (B).

CONCLUSION: On monitor C, gamma correction had a neutral effect. On all other monitors, the effects were negative. Unfortunately, I was unable to find a standalone PC monitor for my comparison. It is entirely possible that a PC monitor would give a different result. However, since most people use laptops and tablets nowadays, I doubt the article's premise that "every coder should know about gamma".

I was going to comment snarkily: "Really? Every coder? What if you program toasters?"

Then it immediately occurred to me that a toaster has some binary enumeration of the blackness level of the toast, like from 0 to 15, and this corresponds in a non-linear way to the actual darkness: i.e. yep, you have to know something about gamma.

This is one of the most fascinating articles I've come across on HN, and so well explained, so thank you.

But I wonder about what the "right" way to blend gradients really is -- the article shows how linear blending of bright hues results in an arguably more natural transition.

Yet a linear blending from black to white would actually, perceptually, feel too light -- exactly what Fig. 1 looks like -- the whole point is that a black-to-white gradient looks more even if calculated in sRGB, and not linearly.

So for gradients intended to look good to human eyes, or more specifically that change at a perceptually constant rate, what is the right algorithm when color is taken into account?

I wonder if relying just on gamma (which maps only brightness) is not enough, but whether there are equivalent curves for hue and saturation? For example, looking at any circular HSV color picker, we're very sensitive to changes around blue, and much less so around green -- is there an equivalent perceptual "gamma" for hue? Should we take that into account for even better gradients, and calculate gradients as linear transitions in HSV rather than RGB?
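One partial answer people reach for is to interpolate in a roughly perceptually uniform space such as CIELAB, rather than in RGB or HSV. A sketch using the standard sRGB/D65 constants (note Lab interpolation can wander out of the sRGB gamut, hence the clip at the end):

```python
import numpy as np

M = np.array([[0.4124564, 0.3575761, 0.1804375],   # sRGB (linear) -> XYZ, D65
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE = np.array([0.95047, 1.0, 1.08883])          # D65 reference white

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin / WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def lab_to_srgb(lab):
    L, a, b = lab
    fy = (L + 16) / 116
    f = np.array([fy + a / 500, fy, fy - b / 200])
    xyz = np.where(f > 6 / 29, f ** 3, 3 * (6 / 29) ** 2 * (f - 4 / 29)) * WHITE
    lin = np.linalg.inv(M) @ xyz
    srgb = np.where(lin <= 0.0031308, lin * 12.92, 1.055 * lin ** (1 / 2.4) - 0.055)
    return np.clip(srgb, 0, 1)  # Lab lerp can leave the sRGB gamut

def lab_gradient(c1, c2, steps):
    a, b = srgb_to_lab(c1), srgb_to_lab(c2)
    return [lab_to_srgb(a + (b - a) * t) for t in np.linspace(0, 1, steps)]

for c in lab_gradient([0.0, 0.0, 1.0], [0.0, 1.0, 0.0], 5):  # blue -> green
    print(np.round(c, 3))
```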

I think the deep underlying problem is not just handling gamma, but that to this day the graphics systems we use make programs produce their output in the color space of the connected display device. If graphics system coders in the late 1980s and early 1990s had bothered to just think for a moment and look at the existing research, then the APIs we're using today would expect colors in a linear connection color space (in the sense of an ICC profile connection space).

Practically all the problems described in the article (which BTW has a few factual inaccuracies regarding the technical details of the how and why of gamma) vanish if graphics operations are performed in a linear connection color space. The most robust choice would have been CIE1931 (aka XYZ1931).

Doing linear operations in CIE Lab also avoids the gamma problems (the L component is linear as well); the chroma transformation between XYZ and the ab components of Lab is nonlinear, however. Still, from an image processing and manipulation point of view, doing linear operations on the ab components of Lab will actually yield the "expected" results.

The biggest drawback with connection color spaces is that 8 bits of dynamic range are insufficient for the L channel; 10 bits is sufficient, but in general one wants at least 12 bits. Within 32 bits per pixel a practical distribution is 12L 10a 10b. Unfortunately current GPUs suffer a performance penalty with this kind of alignment, so in practice one is going to use a 16-bits-per-channel format.

One must be aware that, aside from the linear XYZ and Lab color spaces, images are often stored with a nonlinear mapping even when a connection color space is used. For example, DCI-compliant digital cinema package video essence is specified to be stored as CIE1931 XYZ with a D65 whitepoint and a gamma=2.6 mapping applied, using 12 bits per channel.
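A rough way to see the 8-vs-12-bit point above, assuming a ~1% Weber fraction for the smallest visible relative luminance step (a common rule of thumb, not an exact threshold):

```python
# Adjacent integer codes n and n+1 in linear light differ by a relative
# step of 1/n, which exceeds the ~1% visibility threshold for all n < 100.
for bits in (8, 10, 12):
    levels = 2 ** bits - 1
    frac = min(100 / levels, 1.0)  # luminance below which steps are visible
    print(f"{bits:2d} bits: visible banding below {frac:6.1%} of full scale")
# ->  8 bits: visible banding below  39.2% of full scale
#    10 bits: visible banding below   9.8% of full scale
#    12 bits: visible banding below   2.4% of full scale
```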

The thing that seems a bit weird to me is that the constant-light-intensity gradation (fig 1) appears much more even/linearly monotonic to me than the perceptual one (fig 2), which seems really off at the ends: it sticks to really, really dark black for too long at the left end and shifts to white too fast at the right end.

This is very good and useful; I'll have to update my ray-tracer accordingly.

One thing not discussed, though, is what to do with values that don't fit in the zero-to-one range. In 3-D rendering there is no maximum intensity of light, so what's the ideal strategy for mapping down to the needed range?
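There's no single ideal answer, but a common starting point is a tone-mapping curve rather than a hard clip; Reinhard's x/(1+x) operator is the classic example. A sketch (the function names are mine):

```python
import numpy as np

def reinhard(x):
    return x / (1.0 + x)          # smoothly compresses highlights, never clips

def exposure(x, ev=0.0):
    return np.clip(x * 2.0 ** ev, 0.0, 1.0)   # scale by 2^ev, then hard clip

hdr = np.array([0.1, 0.5, 1.0, 4.0, 100.0])   # unbounded linear radiance
print(reinhard(hdr))       # [0.091 0.333 0.5   0.8   0.99 ]
print(exposure(hdr, -2))   # [0.025 0.125 0.25  1.    1.   ]
```

Either way, do the compression in linear light and only then apply the sRGB encoding.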

Good reminder about these persisting blending issues in the linear interpretation of RGB values, which was well explained to non-coders as well here in a quite popular MinutePhysics video: https://www.youtube.com/watch?v=LKnqECcg6Gw

As others commented the gamma scaling issues seem even more relevant.

Just please, don't use the RGB color space for generating gradients. In fact, it's ill-suited for most operations concerning the perception of colors as is.

Interesting excursion: historically the default viewing gammas seem to have lowered, because broadcasting defaulted to dimly lit rooms, while today's ubiquitous displays are usually in brighter environments.

I get that this is an article about gamma, but it should have mentioned that sRGB is on the way out. People who need to think about gamma also need to think about wider color spaces like DCI-P3, which the Apple ecosystem is moving to pretty quickly (and others would be dumb to not follow).

What is rarely mentioned is that the transfer function of LCDs is a sigmoid rather than a power curve. The latter is simulated for desktop displays to maintain compatibility with CRTs. Embedded LCDs don't usually have this luxury.

I'm divided; I really want the article to be true, and for everyone to realise what a big mistake we've been making all along... but, as the legions of us who don't adjust for gamma demonstrate, ignoring it doesn't make the world end?!

Does it mean that when converting from sRGB encoding to a physical-intensity encoding we have to extend the number of bits used for the physical intensity values, to avoid rounding errors relative to the sRGB encoding?

I guess that the required number of bits to encode physical intensity values depends on the operations performed. The author suggests using floats, but this means 3x4 bytes, or 4x4 bytes with the alpha channel. Would 16-bit unsigned integers be enough? Floats are OK when using graphics cards, but not OK when using the processor.
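As a quick empirical check of the 16-bit question, here's a sketch that exhaustively round-trips all 256 8-bit sRGB codes through a quantized linear encoding:

```python
import numpy as np

def srgb_to_linear(s):
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(l):
    return np.where(l <= 0.0031308, l * 12.92, 1.055 * l ** (1 / 2.4) - 0.055)

codes = np.arange(256)

# 16-bit unsigned linear: quantize, then convert back to 8-bit sRGB.
lin16 = np.round(srgb_to_linear(codes / 255.0) * 65535)
back16 = np.round(linear_to_srgb(lin16 / 65535.0) * 255)
print(np.count_nonzero(back16 != codes))  # 0: every sRGB code survives

# 8-bit linear for comparison: the darkest codes collapse together.
lin8 = np.round(srgb_to_linear(codes / 255.0) * 255)
back8 = np.round(linear_to_srgb(lin8 / 255.0) * 255)
print(np.count_nonzero(back8 != codes))   # > 0: dark codes are lost
```

So 16 linear bits are enough to round-trip 8-bit sRGB losslessly; whether precision survives a chain of arithmetic in between is a separate question.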

> The graphics libraries of my operating system handle gamma correctly. (Only if your operating system is Mac OS X 10.6 or higher)

Not just OS X. The majority of Linux games from the past 2 decades, including all SDL and id Tech 1-3 games, relied on the X server's gamma function. An X.Org Server update broke it about 6 years ago. It was fixed a few weeks ago.

> sRGB is a colour space that is the de-facto standard for consumer electronic devices nowadays, including monitors, digital cameras, scanners, printers and handheld devices. It is also the standard colour space for images on the Internet.

Ok, does that mean that the device performs the gamma-transformation for me, and I don't need to worry about gamma?

When viewing this on a macbook air, the discussion around the two images in the section "Light emission vs perceptual brightness" appears weird. To me, the first image appears linearly spaced and in the second image I can hardly make out the difference between the first few bars of black.

Beautiful description and great examples. One thing confuses me. I'm actually using PS CS5 (supposedly the last 'correct' one?) and resizing figure 11 to 50% actually results in B, not C. Is there an option/setting I can use to fix this?

Did anyone else think the first set of bars was linear not the second? I could not notice any difference between the leftmost three bars on the bottom section. Or does this relate to how iPad renders images or something? ed: Same issue on PC.

As a pedantic writer, it annoys me that the article starts by mentioning a quiz and making a big deal about answering yes or no to the questions... but there aren't actually any questions. The "quiz" is a list of statements. Each one can be understood by context to imply a question about whether you agree with the statement, but it's distracting because you can't answer yes to something that isn't a question.

Despite the assumption of some newer open-source developers that sending a pull request on GitHub automatically licenses the contribution for distribution on the terms of the project's existing license (what Richard Fontana of Red Hat calls inbound=outbound), United States law doesn't recognize any such rule. Strong copyright protection, not permissive licensing, is the default.

That isn't quite what I mean by "inbound=outbound". Rather, inbound=outbound is a contribution governance rule under which inbound contributions, say a pull request for a GitHub-hosted project, are deemed to be licensed under the applicable outbound license of the project. This is, in fact, the rule under which most open source projects have operated since time immemorial. The DCO is one way of making inbound=outbound more explicit, and I increasingly think one that should be encouraged (if only to combat the practice of using CLAs and the like). But under the right circumstances it works even where the contribution is not explicitly licensed (I think this is what Kyle may be questioning). There are other ways besides the DCO of creating greater certainty, or the appearance of greater certainty, around the inbound licensing act, such as PieterH's suggestion of using a copyleft license like the MPL, or the suggestion of using the Apache License 2.0 (whose section 5 states an inbound=outbound rule as a kind of condition of the outbound license grant).

This is a really good article. There's one part in particular that struck me:

"Despite the assumption of some newer open-source developers that sending a pull request on GitHub automatically licenses the contribution for distribution on the terms of the projects existing licensewhat Richard Fontana of Red Hat calls inbound=outboundUnited States law doesnt recognize any such rule. Strong copyright protection, not permissive licensing, is the default."

In other words the fork + pull request + merge flow does not work on a project unless you have an explicit step like a CLA, or an alternative solution.

We faced this problem early on in ZeroMQ: asking contributors to take this extra step increased the work for maintainers (to check, is this the first time person X has contributed, and have they made a CLA?). It also scared off contributors from businesses, where this often required approval (which took time and was often denied).

Our first solution in ZeroMQ was to ask contributors to explicitly state, "I hereby license this patch under MIT," which let us safely merge it into our LGPL codebase. Yet again, another extra step, and again one that needs corporate approval.

Our current solution is I think more elegant and is one of the arguments I've used in favor of a share-alike license (xGPL originally and MPLv2 more these days) in our projects.

That works as follows:

* When you fork a project ABC that uses, say, MPLv2, the fork is also licensed under MPLv2.

* When you modify the fork, with your patch, your derived work is now also always licensed under MPLv2. This is due to the share-alike aspect. If you use MIT, at this stage the derived work is (or rather, can be) standard copyright. Admittedly if you leave the license header in the source file, it remains MIT. Yet how many maintainers check the header of the inbound source file? Not many IMO.

* When you then send a patch from that inbound project, the patch is also licensed under MPLv2.

* Ergo there is no need for an explicit grant or transfer of copyright.

I wonder if other people have come to the same conclusion, or if there are flaws in my reasoning.

> The phrase "arising from, out of or in connection with" is a recurring tic symptomatic of the legal draftsman's inherent, anxious insecurity.

Indeed. I'm trying to imagine a court saying "Well... there were damages, but they arose out of the software and not from the software, so therefore- oh, wait! The license actually includes arising "out of" the software as well as "from" the software, so I guess the limitation of liability stands. Case dismissed!"

A bit off-topic, but I would be very interested in somebody making a case for why an OS license is better than a simple line like "This code is free for everybody to use as they wish." I've read about it plenty, but remain unconvinced.

The OS software I write is for the good of everybody, not just its own popularity or the OS community. I'm fine with all uses of it, in whole or in part, whether or not I'm credited. The license reproduction requirement therefore feels like unnecessary noise, and I'd like to think that courts are sane enough that the warranty disclaimer is unnecessary too - is there any real court case where somebody has been sued for a defect in free, OS software, without an explicit warranty, and lost?

These licenses have little flaws upon closer examination. One day I was reading the BSD license closely, in the context of its use in the TXR project, and was astonished to find that it was buggy and required tweaking to make it more internally consistent and true to its intent. I added a METALICENSE document where the changes are detailed:

The main problem is that the original says that both the use and redistribution of the software are permitted provided that the "following conditions are met", which is followed by two numbered conditions, 1 and 2. But the two conditions do not govern use at all; they are purely about redistribution! Rather, the intended legal situation is that use of the software means that the user agrees to the liability and warranty disclaimer (which is not a condition). But the BSD license neglects to say that at all; it says use is subject to the conditions (just that 1 and 2), not to the disclaimer.

Licenses like MIT and BSD should be avoided due to the patent risk in favor of licenses that explicitly grant patent protection like Apache 2.0. The patent troll risk is just way too high. As rando said, companies like Microsoft are even open-sourcing code while raking in hundreds of millions from patent suits against open-source software. That this is even working for them shows the permissive licenses need to eliminate that tactic entirely.

A very lovely article; I enjoyed it very much, since it gives insight into the "syntax" of legal documents in the US.

This article brought up a point I find very interesting. The MIT license (and a bunch of other licenses as well) is very US-oriented in its writing, provisions, etc. I'd love to read a similar article exploring licenses like these from a, say, European point of view. Would the same constructs hold up in a German court, for example? What language is missing or superfluous?

Chrome, on my system, is even more abusive. Watch the size of the .config/google-chrome directory and you'll see that it grows to multiple gigabytes of profile files.

There is a Linux utility that takes care of all browsers' abuse of your SSD, called profile sync daemon (PSD). It's available in the Debian repo, or [1] for Ubuntu, or [2] for source. It uses the `overlay` filesystem to direct all writes to RAM and only syncs the deltas back to disk every n minutes using rsync. I've been using this for years. You can also manually alleviate some of this by setting up a tmpfs and symlinking .cache to it.

Hi, I'm one of the Firefox developers who was in charge of Session Restore, so I'm one of the culprits of this heavy SSD I/O. To make a long story short: we are aware of the problem, but fixing it for real requires completely re-architecting Session Restore. That's something we haven't done yet, as Session Restore is rather safety-critical for many users, so this would need to be done very carefully, and with plenty of manpower.

I hope we can get around to doing it someday. Of course, as usual in an open-source project, contributors welcome :)

I have been running Firefox for a long time with an LD_PRELOAD wrapper which turns fsync() and sync() into a no-op.

I feel it's a little antisocial for regular desktop apps to assume it's their place to do this.

Chrome is also a culprit; similar syncing caused us problems at my employer's, inflating pressure on an NFS server where /home directories are network mounts, even though we had already pointed the cache at a local disk.

At the bottom of these sorts of cases I have, on more than one occasion, found an SQLite database. I can see its benefit as a file format, but I don't think we need full database-grade synchronisation on things like cookie updates; I would prefer to lose a few seconds (or minutes) of cookie updates on power loss than over-inflate the I/O requirements.
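For what it's worth, SQLite itself exposes exactly that trade-off between durability and I/O; a sketch (the filename is just a placeholder, and whether a browser should relax this is the debate above):

```python
import sqlite3

db = sqlite3.connect("cookies.sqlite")     # placeholder filename
db.execute("PRAGMA journal_mode = WAL")    # append-style journal, fewer fsyncs
db.execute("PRAGMA synchronous = NORMAL")  # fsync at checkpoints, not every commit
```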

Even better, just disable session restore entirely via browser.sessionstore.enabled. Since Firefox 3.5 this preference has been superseded by setting browser.sessionstore.max_tabs_undo and browser.sessionstore.max_windows_undo to 0.

As I understand it, this feature is there so that if the browser crashes it can restore your windows and tabs - I don't remember having a browser crash on me since the demise of Flash.

This is a far superior solution to fiddling with configuration options in each individual product to avoid wearing down your SSD with constant writes. Murphy's law has it that such hacks will only be frustrated by the next version upgrade.

And no, using Chrome does not help. All browsers that use disk caching or keep complex state on disk are fundamentally heavy on writes to an SSD. The amount of traffic itself is not even a particularly good measure of SSD wear, since a small write cannot be performed at the hardware level without programming a whole page, and rewriting data eventually forces the drive to recycle a whole erase block, which is generally several megabytes in size. So changing a single byte in a file can end up no less taxing than a multi-megabyte write.
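To put toy numbers on that (the geometry here is illustrative only; real page and erase-block sizes vary by drive):

```python
# Toy write-amplification arithmetic.
logical_write = 1024               # bytes the application wrote
page = 16 * 1024                   # smallest programmable unit (assumed)
erase_block = 4 * 1024 * 1024      # smallest erasable unit (assumed)

print(page // logical_write)        # 16x if the 1 KB write occupies a page
print(erase_block // logical_write) # 4096x worst case: whole-block rewrite
```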

Maybe I am not understanding this right, but is this saying that Firefox will continually keep writing to the disk while idle? Does anyone know more about this? Why would this be needed to restore session/tabs? Seems like it should only write after a user action or if the open page writes to storage? Even if it was necessary to write continually while idle, how could it possibly consume so much data in such a short period of time?

I still think the worry about it wearing out an SSD is overblown. A 20GB-per-day write rating is extremely conservative and mostly there to rule out the more pathological use cases, like taking a consumer SSD, using it for some write-heavy database load with 10x+ write amplification, and then demanding a new one on warranty when you wear it out.

Backing up the session is still sequential writes so write amplification is minimal. After discovering the issue I did nothing and just left Firefox there wearing on my SSD. I'll still die of old age before Firefox can wear it out.
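Back-of-the-envelope, with hypothetical but plausible numbers (100 TBW is a made-up rating for a mid-range consumer drive; check your own spec sheet):

```python
rated_endurance = 100e12   # bytes of rated writes (100 TBW, assumed)
daily_writes = 20e9        # 20 GB/day, from the figure above

days = rated_endurance / daily_writes
print(f"{days:.0f} days = {days / 365:.1f} years")  # 5000 days = 13.7 years
```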

I checked my system - Firefox wasn't writing much, and what it is writing is going to my user directory on the hard drive instead of the program directory on the SSD, so that's nice. But still, I don't want my browser cluttering up my drive with unnecessary junk - history, persistent caching from previous sessions, old tracking cookies, never mind a constant backup of the state of everything. I try to turn all that off, but there's always one more hidden thing like this.

If I want to save something, I'll download it. If I want to come back, I'll bookmark it. Other than those two cases and settings changes, all of which are triggered by my explicit choice & action, it really shouldn't be writing/saving/storing anything. Would be nice if there were a lightweight/portable/'clean' option or version.

When I tried Spotify, it was pretty bad about that too - it created many gigabytes of junk in the background and never cleaned up after itself. I made a scheduled task to delete it all daily, but eventually just stopped using Spotify.

Yep, I have a brand new SSD that over the course of a few months accumulated several TERAbytes (yes - TERA) of writes directly attributable to the default FF session sync interval, coupled with the fact that I leave it open 24/7 with tons of open tabs.

Once I noticed that excessive writes were occurring, it was easy for me to identify FF as the culprit in Process Hacker but it took much longer to figure out why FF was doing it.

The interesting question here is, why is the browser writing data to disk at this rate?

If it's genuinely receiving new data at this rate, that's kind of concerning for those of us on capped/metered mobile connections. The original article mentions that cookies accounted for the bulk of the writes, which is distressing.

Firefox has been terrible for disk access for many years. Back in about 2003 I had a post-install checklist (never actually automated) that I would run through on my Linux boxes to cut down on this and speed up the whole system.

Basically chattr +i on a whole bunch of its files and databases, and everything's fine again...

I do wonder if their mobile version has a similar problem. I have noticed it chugs badly when opened for the first time in a while on Android, meaning I have to leave it sitting for a while so it can get things done before I can actually browse anything.

Firefox is relying too much on session restore to deal with bugs in their code. Firefox needs to crash less. With all the effort going into multiprocess Firefox, Rust, and Servo, it should be possible to have one page abort without taking down the whole browser. About half the time, session restore can't restore the page that crashed Firefox anyway.

Putting aside how this may not be all that bad for most SSDs, does anyone know when this behavior started?

Firefox really started to annoy me with its constant and needless updates a few months back, the tipping point being breaking almost all legacy extensions (in 46, I believe). This totally broke the Zend Debugger extension; the only way forward would be to totally change my development environment. I'm 38 now, and apparently well beyond the days when the "new and shiny" holds value. These days I just want stability and reliability.

Firefox keeps charging forward and, as far as I can tell, has brought nothing to the table except new security issues and breaking that which once worked.

I haven't updated since 41 and you know what, it's nearly perfect. It's fast, does what I need it to do, and just plain old works.

Firefox appears to have become a perfect example of developing for the sake of developing.

Seriously, the default options to ssh-keygen should be all anybody needs. If you need to pass arguments to increase the security of the generated key, then the software has completely failed its purpose. Passing arguments should only be for falling back on less secure options, if there is some limiting factor for a particular deployment.

There is absolutely no reason to pass arguments to ssh-keygen. If it is actually deemed necessary to do so, then that package's installation is inexcusably broken.

Something I don't understand is the "hate" that RSA gets. Yeah, elliptic curves are promising and have benefits (smaller/faster).

But RSA isn't broken, it is well understood, is "boring" (a plus for security, usually), has bigger key sizes (according to people who know a lot more than me, that's a plus, regardless of EC requiring smaller ones, because of certain attacks), isn't hyped and sponsored by the NSA, and isn't considered a bad choice by experts.

Not too many years ago Bruce Schneier was skeptical about EC because of the NSA pushing for it. Now, I also trust djb, and I am sure that Ed25519 is a good scheme, and there are many projects, like Tor, that actually benefit from it (increased throughput, etc.), but for most use cases of SSH that might not be the issue, nor the bottleneck.

So from my naive, inexperienced point of view RSA might seem the more conservative option. And if I was worried about security I'd increase the bit size.

I disagree with the author. Before you go upgrading to ed25519, beware that the NSA/NIST is moving away from elliptic curve cryptography because it's very vulnerable to cracking with quantum attacks[0].

"So let me spell this out: despite the fact that quantum computers seem to be a long ways off and reasonable quantum-resistant replacement algorithms are nowhere to be seen, NSA decided to make this announcement publicly and not quietly behind the scenes. Weirder still, if you havent yet upgraded to Suite B, you are now being urged not to. In practice, that means some firms will stay with algorithms like RSA rather than transitioning to ECC at all. And RSA is also vulnerable to quantum attacks."

Stick with the battle-tested RSA keys, which are susceptible, but not as much as ECC crypto. 4096 or, even better, 8192-bit lengths.

There are no perceptible user benefits to using ed25519, and it's not even supported everywhere. Also, you won't have to rotate all of your keys when workable quantum computers start crackin' everything.

Noob question here: why move just one step ahead? Why not 8192 or, hell, 16,384? I can see it leading to higher CPU consumption for often-used keys, but for keys that are not accessed more than a couple of times a day, why is it such a bad idea to overdo it?
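One concrete answer is raw CPU cost, which you can measure yourself. A quick sketch using the Python cryptography package (absolute timings vary wildly by machine, and note that every private-key operation, not just generation, slows down with modulus size):

```python
import time
from cryptography.hazmat.primitives.asymmetric import rsa

# Key generation cost grows steeply with modulus size; 8192-bit keys
# can take minutes on slower hardware.
for bits in (2048, 4096, 8192):
    t0 = time.perf_counter()
    rsa.generate_private_key(public_exponent=65537, key_size=bits)
    print(f"{bits}-bit keygen: {time.perf_counter() - t0:.1f}s")
```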

So I once read somewhere that RSA is simpler to implement than most other algorithms, and hence it's a safer choice, because weaknesses typically come from suboptimal implementations more than from the cryptographic algorithm itself. (Unless you use known-broken things like MD5 or 3DES.)

And I think that was in the context of some DSA or ECDSA weakness, possibly a side channel attack or something similar. I forgot the details :(

What are your thoughts on this? Should we focus more on the simplicity and robustness of the implementation, rather than just the strength of the algorithm itself?

I notice a lot of negativity around here. I don't know why that is... but I'll give my five cents on it.

NIH - Not invented here and redoing an opensource project.

- GitHub said they used HAProxy before; I think GitHub's use case could very well be unique, so they created something that works best for them rather than re-engineering an entire code base. When you work on small projects, you can send a merge request to make changes, but I think this is something bigger than just a small bugfix ;). I totally understand them creating something new here.

- They built on a number of open source projects, including haproxy, iptables, FoU and pf_ring. That is what open source is: use open source to create what suits you best. Every company has some edge cases, and I have no doubt that GitHub has a lot of them ;)

Now,

Thanks, GitHub, for sharing; I'll follow up on your posts and hope to learn a couple of new things ;)

Given this is based on HAProxy and seems to improve the director tier of a typical L4/L7 split design, I'm led to believe GLB is an improved TCP-only load balancer.

But they also talk about DNS queries, which are still mainly UDP53, so I'm hoping GLB will have UDP load-balancing capability as gravy on top. I excluded zone transfers, DNSSEC traffic and (growing) IPv6 DNS requests on TCP53 because, at least in carrier networks, we're still seeing a tonne of DNS traffic that fits within plain old 512-byte UDP packets.

Looking forward to seeing how this develops.

EDIT: Terrible wording on my part to imply that GLB is based off of HAProxy code. I meant to convey that GLB seems to have been designed with deep experience working with HAProxy as evidenced by the quote: "Traditionally we scaled this vertically, running a small set of very large machines running haproxy [...]".

I am increasingly bothered by the "not invented here" syndrome, where instead of taking existing projects and enhancing them, in true open source fashion, people re-create from scratch.

It is then justified that their creation is needed because "no one else has these kinds of problems" but then they open source them as if lots of other people could benefit from it. Why open source something if it has an expected user base of 1?

Again, I am not surprised by this. The whole push of GitHub is not to create a community which works together on a single project in a collaborative, consensus-based way, but rather lots of people doing their own thing and only occasionally sharing code. It is no wonder that they follow this meme internally.

While I understand that NIH syndrome is a real thing, it is very disappointing to read many of the comments here.

I think very few HN readers are really in a position to have an informed opinion regarding GitHub's decision to build a new piece of software rather than use an existing system.

Personally I find this area quite interesting to read about because it is very difficult to build highly available, scalable, and resilient network service endpoints. Plain old TCP/IP isn't really up to the job. Dealing with this without any cooperation from the client side of the connection adds to the difficulty.

Given the title and the length of the post I was expecting a lot more detail.

> Over the last year we've developed our new load balancer, called GLB (GitHub Load Balancer). Today, and over the next few weeks, we will be sharing the design and releasing its components as open source software.

Is it common practice to do this? Most recent software/framework/service announcements I've read were just a single, longer post with all the details and (where applicable) source code. The only exception I can think of is the Windows Subsystem for Linux (WSL) which was discussed over multiple posts.

They talk about running on "bare metal" but when I followed that link it looked like they were simply running under Ubuntu. Is it so much a given that everything is going to be virtualized?

When I think of "bare metal" I think of a single image with disk management, network stack, and what few services they want all running in supervisory mode. Basically the architecture of an embedded system.

I'm of two minds about this. Part of me agrees with many of the commenters here, in that Not Invented Here syndrome was probably in effect during the development of this. I don't really know Github's specific use case, and I don't know the various open source load balancers outside of Haproxy and Nginx, but I would be surprised if their use case hasn't been seen before and can be handled with the current software (with some modification, pull requests, etc.). On the other hand, I would guess Github would research into all of this, contact knowledgeable people in the business, and explore their options before spending resources on making an entirely new load balancer. Maybe it really is difficult to horizontally scale load balancing, or load balance on "commodity hardware".

That being said, why introduce a new piece of technology without actually releasing it if you're planning to release it, without giving a firm deadline? This isn't a press release, this is a blog post describing the technical details of the load balancer that is apparently already in production and working, so why not release the source when the technology is introduced?

I love using GitHub and appreciate the impact it is having and has had. But this post is what is wrong with the web today. They have taken a distributed-at-its-plumbing technology and centralised it so much that now we need to innovate new load balancing mechanisms.

Years ago I worked at Demon Internet and we tried to give every dial-up user a piece of webspace - just a disk, always connected. Almost no one ever used it. But that is what the web is for: storing your Facebook posts and your git pushes and everything else.

No load balancing needed because almost no one reads each repo.

The problem is it is easier to drain each of my different things into globally centralised locations, easier for me to just load it up on GitHub than keep my own repo on my cloud server. Easier to post on Facebook than publish myself.

But it is beginning to creak. GitHub faces scaling challenges; I am frustrated that some people are on WhatsApp and some on Slack and some on Telegram, and I cannot track who is talking to me.

The web is not meant to be used like this. And it is beginning to show.

I am intrigued by their opening statement of multiple POPs, but the lack of multi-POP discussion further in the system description.

My understanding is that the likes of, for example, Cloudflare or EC2 have a pretty solid system in place for issuing geoDNS responses (historical latency/bandwidth, ASN or geolocation based DNS responses) to direct random internet clients to a nearby POP. Building such a system is not that difficult, I am fairly confident many of us could do so given some time and hardware funding.

Observation #1: No geoDNS strategy.

Observation #2: Limited global POPs.

Given that the inherently distributed nature of git probably makes providing a multi-pop experience easier than for other companies, I wonder why Github's architecture does not appear to have this licked. Is this a case of missing the forest for the trees?

Why not just use DNS load balancing over VIPs served by HA pairs of load balancers?

Back in the day we did this with Netscalers doing L7 load balancing in clusters, and then Cisco Distributed Directors doing DNS load balancing across those clusters.

It can take days/weeks to bleed off connections from a VIP that is in the DNS load balancing, but since you've got an H/A pair of load balancers on every VIP you can fail over and fail back across each pair to do routine maintenance.

That worked acceptably for a company with a $10B stock valuation at the time.

I know they mentioned their SYN flood tool but I recently saw a similar project from a hosting provider and thought it was neat [1]. It seems like everyone wants their own solution to this when it is a very common and non-trivial problem.

It is huge that a lawyer would disclose in a public setting such important confidential numbers. I even have trouble seeing how something like that could be "accidental". It is basically a force of habit among experienced litigators to think and to say, in any number of contexts, "I know this may be relevant but I can't discuss it because it is the subject of a protective order" or "I know the attorneys know this information but it was disclosed under the protective order as being marked for 'attorneys' eyes only'". In all my years of litigating, I don't believe I have ever heard a casual slip on such information, even in otherwise private contexts (e.g., attorneys are discussing with their own client what an adverse party disclosed and are very careful not to disclose something marked for "attorneys' eyes only"). Certainly willful disclosures of this type can even get you disbarred.

But the significance of this breach is not the only thing that caught my eye.

These litigants have been entrenched in scorched-earth litigation for years now in which the working M.O. for both sides is to concede nothing and make everything the subject of endless dispute. Big firm litigators will often do this. It is a great way to rack up bills. Clients in these contexts do not oppose it and very often demand it. And so a lot of wasteful lawyering happens just because everyone understands that this is an all-out war.

To me, then, it seems that the big problem here (in addition to the improper disclosures of highly important confidential information in a public court hearing) was the resistance by the lawyers who did this to simply acknowledging that a big problem existed that required them to stipulate to getting the transcript sealed immediately. Had they done so, it seems the information would never have made the headlines. Instead (and I am sure because it had become the pattern in the case), they could not reach this simple agreement with the other lawyers to deal with the problem but had to find grounds to resist and fight over it.

I know that we as outside observers have limited information upon which to make an assessment here and so the only thing we can truly say from our perspective is "who knows". Yet, if the surface facts reflect the reality, then it is scarcely believable that the lawyers could have so lost perspective as to take this issue to the mat, resulting in such damage to a party. Assuming the facts are as they appear on the surface, this would be very serious misconduct and I can see why Judge Alsup is really mad that it happened.

As background, this opinion piece by the lawyer in question may be useful in understanding the mindset of the players. Hurst argues that because APIs are not copyrightable, the GPL is dead and Oracle's valiant attempts to defend free software have been foiled:

The Death of "Free" Software... or How Google Killed GPL, by Annette Hurst (@divaesq)

The developer community may be celebrating today what it perceives as a victory in Oracle v. Google. Google won a verdict that an unauthorized, commercial, competitive, harmful use of software in billions of products is fair use. No copyright expert would have ever predicted such a use would be considered fair. Before celebrating, developers should take a closer look. Not only will creators everywhere suffer from this decision if it remains intact, but the free software movement itself now faces substantial jeopardy.

This wasn't an accidental "slip" by a poorly trained intern. This was a conscious disclosure made by one of Oracle's lead attorneys. She is one of the top IP lawyers in the nation: https://www.orrick.com/People/2/6/2/Annette-Hurst. It is in keeping with the "scorched earth" strategy that has been followed for this case. She knew what she was doing, and she (and her firm) should pay the consequences. If there are no consequences, it will legitimize and reward this strategy.

This article reads very weirdly to me. Are they arguing that disclosing confidential information, and subsequently opposing steps to contain the disclosed information, is perfectly fine because ... it can be found on the internet, precisely because of this disclosure? This makes absolutely no sense to me.

Having read this article it reminds me somewhat of tactics in movies where lawyers deliberately ask an inflammatory question in front of a jury purely for the purpose of planting a seed, and before anyone can yell objection they immediately retract knowing that the damage has been done. The judge may strike it from the record, the judge may tell the jury to disregard it, but you can't unthink or unhear something that's been said. The bell has already been rung.

I don't (or can't, I'm unsure) believe that lawyers of this caliber make mistakes like this. So what was her play by doing this? Did it pay off?

Off-topic, but I find it strange that money in the order of $1B can change hands between two mega-corporations without anyone outside having an inkling, while I could find websites saying exactly how much a low-level government worker earns in a social services center in my county. (Spoiler: much less than I used to earn as developer.)

Slightly off topic, but I've always had a hard time wrapping my head around the stance that somehow an API is distinct from code. I understand that it's an abstraction in programming, and that industry practice has been that it's acceptable to take an existing API that you didn't create and write a new implementation.

But since the API is "implemented" in code, it seems like for the purpose of copyright consideration that the distinction is simply one of custom.

It's a programming abstraction; to create your own "implementation" of the API you still have to use code that is identical to the original.

Alsup's original, overturned ruling was that as a matter of law APIs couldn't be copyrighted, because they express an idea that can only be expressed exactly that way, and traditionally this would not be allowed (you can't copyright an idea). As I understood it, his reasoning implied that to get IP protection over an API would require something more like patent protection. (I might be totally wrong on this.)

The judge tried to convey the depth of this revelation by comparing it to the most secret thing he could imagine:

> If she had had the recipe for Coca-Cola she could have blurted it out in this court right now.

(Seriously!)

EDIT: I wasn't trying to be snarky or silly, just pointing out an aspect of the story that struck me as funny. Serious request: if that is inappropriate, please let me know rather than just silently downvoting. In that case, I apologise and will delete the post.

Regardless of the outcome her career in litigating high profile cases is pretty much over. You simply do not utter highly confidential company information accidentally. It was intentional and it was done to paint a picture to the jury about how much money Google was making from Android and what it was paying Apple.

As someone who knows zilch about business, I don't quite understand why people knowing these numbers is so devastating. What will another company do with these two numbers that it otherwise wouldn't do?

God I hate that woman. When she was a US Attorney for SF, she went around and threatened to seize buildings where medical cannabis dispensaries were located, in full compliance with local laws. Because she couldn't do anything to the dispensaries directly, she threatened their landlords. This was after Obama had said that the DoJ would not interfere with dispensaries which were operating within state laws.

How can a public corporation keep those two numbers secret? Those are basic cost and revenue numbers that should be disclosed in their annual financial statements. The fact that it's legal to keep those numbers secret means there's something very wrong with how we do financial disclosure in America.

If anything, my only sadness is that more of Google's dirty laundry wasn't aired. This illusion that Google search is winning because people prefer it and that Google doesn't make money on Android are both claims I'm happy to see debunked. Google's anti-monopoly claims fundamentally hinge on concepts like these.

And if a lawyer did break the law by doing it, I say she belongs on the same high pedestal people put Snowden on.

FWIW, the prospect of being suspected and questioned (but not necessarily raided) because of your IP location is one of the best metaphors for relating what it's like as a minority to be searched just because you are of the same race as a suspect in a nearby active case.

It is perfectly logical to say that if there was an assault on a college campus and that the victim said the perp is an "Asian male", for the police to not prioritize the questioning of all non-Asians in the area. And if the report was made within minutes of the incident and the suspect is on foot, it may be justifiable to target the 5 Asian males loitering around rather than the 95 people of other demographics. What logical person would argue otherwise?

But the problem creep comes in the many, many cases when police don't have a threshold for how long and wide that demographic descriptor should be used. Within 1000 feet of the reported attack? A mile? Why not 2 miles? And why not 2 days or even 2 weeks after the incident, just to be safe?

The main difference in the ISP/IP metaphor is that in the digital world, it's possible to imagine search-and-question tactics that aren't time-consuming for the police or for the suspect. Hell, the suspect might not even know their internet-records were under any suspicion. OTOH, there are definitely real-world places in which for the police (and their community and most specifically the politicians), hand-cuffing and patting someone down has been so streamlined and accepted by the powers-that-be that it isn't a bother for them (the police) either.

edit: To clarify, I don't mean to get in the very wide debate on racial profiling, etc. But when I worked at a newspaper, we had a policy to not mention race unless the police could provide 4 or 5 other identifiers. That led to readers cussing us out because, they'd argue, knowing that the suspect was black is better than nothing. My point here is that sometimes, nothing is not always better than something, and that is most explicitly clear when it comes to broad IP range searches.

A similar example, while not a raid, hit me closer to home a bit over a year ago.

I'm sure that if you follow US news at all, you heard about the looting and arson in Baltimore in the Spring of 2015. While the city was on edge in the wake of a citizen's death in police custody, there had already been some minor demonstrations and a brawl between protesters, baseball fans, and provocateurs downtown earlier in the month.

Then, on the day of the funeral held for the man killed in custody, word started to spread of plans for some sort of riot or mass havoc later in the day. Authorities later pointed to a digital "flyer" being passed around, yet nobody investigating this outside of the police has found any source or initial copy of this flyer that dates from before it was published in the media. Trust me, we looked.

In response to this alleged threat to public order, cops with riot gear and a freaking mini-tank showed up at a major public transit hub right as school let out. Transit was shut down and everyone was corralled into a small area next to a busy street and without a way home for hours.

Eventually, tensions got high enough that when the first pissed-off teenager or whoever chucked a bottle or a rock, it didn't take long for others to join in. In the ensuing vandalism and arson, hundreds of thousands of dollars in damage was caused, people got hurt, the city was put under curfew for a week, and to this day businesses and residents have suffered from the reputation gained (worsened?) that day.

Looking back, the part that really sticks out to me is how the whole thing was triggered (assuming you don't think it was a deliberate provocation) by some "social media flyer" that claimed some teens were planning to run around starting shit after school. This rumor summoned riot police, shut down transit, and left loads of adults and teens stranded alongside the road, facing down a phalanx of police plus one armored tactical vehicle.

Would those shops and homes still been damaged or those stores been looted and burned in a wave of unrest without this rumor-inspired flashpoint? No idea. But it sure didn't help.

> If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional... Yet EFF has found that police and courts are regularly conducting and approving raids based on the similar type of unreliable digital evidence: Internet Protocol (IP) address information.

I'm not sure that these two are equivalent. A better example would be the police raiding my home based on an illegal phone call that came from my phone number. Sure, the fact that it comes from my phone number doesn't mean I did it, but it's certainly evidence that points to me, just as an IP address can be.

In general, the summary linked to above makes it sound like police should never use IP addresses. To be fair, if you read the whitepaper itself, it doesn't say this, but rather that police should be _careful_ in how they use IP addresses. Specifically, it recommends that police "conduct additional investigation to verify and corroborate the physical location of a particular device connected to the Internet whenever police have information about an IP address's physical location, and providing that information to the court with the warrant application".

In the 1980s, some powerful senator's cell phone was snooped on, resulting in a major scandal when the contents of his phone calls were revealed in the press.

This resulted in Congress passing laws that made it illegal for radios to be capable of listening in on cell phone frequencies or being easily modified to allow them to do so.

It is likely that only similar widely publicized embarrassments and privacy violations of the rich and powerful will result in any meaningful legislative attempts to curtail the growth of the police state in the United States.

They clearly don't intend to do much about it unless they themselves are the victims of such abuses of power. As long as it's just "nobodies" or social or political outcasts who are the victims of the police and surveillance apparatus, it's doubtful that much will change.

The one I'm familiar with is the Sarasota, FL incident, where a married couple was raided in the middle of the night in response to alleged child pornography. Their unit was in a condominium, practically on the edge of Sarasota bay, where various boats moor and dock. After further investigation, it was discovered that the traffic had originated from some guy in a boat using a high gain antenna. If I remember correctly, he had cracked their WEP key and illegally accessed their network to obtain nasty images, lots of them. The insecurity of WEP has been known about for a long time, presumably by LE too.

It is conjecture on my part, but a few things come to mind regarding alternative methods of investigation that may have avoided this. 1. Contact the ISP first (in this case I think it may have been Verizon). I remember Verizon having the ability to remotely reset router passwords, which possibly suggests the ability to remotely view associated client data, e.g. MAC addresses and hostnames and maybe even OS. This may have provided valuable clues. 2. Note the protocol used by the wireless router. 3. Wardrive a bit. 4. Maybe check for logs of any accounts the boat guy logged into while on their network.

Regardless, the raid was botched and pretty traumatic for the couple, considering they were operating a legal AP probably secured with what they thought was adequate encryption. At the time of this event, WEP was standard default, straight from the ISP. They'd done nothing wrong.

> If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional.

> Yet EFF has found that police and courts are regularly conducting and approving raids based on the similar type of unreliable digital evidence: Internet Protocol (IP) address information.

When police go after an IP address, it happens after there is evidence linking it to some crime. That makes the situation wholly unlike an anonymous phone call, where there is no evidence a crime has even been committed, and where the identifying information itself is trivial to falsify.

Also, IP addresses give a lot more information than the article implies. Especially these days now that everyone has a home router that probably keeps the same IP address for weeks at a time if not months. Not enough to trigger a police raid, of course (if we want to argue that the police have too low a standard of evidence for initiating a raid, I agree) but it's probably a good lead to go on in the common case.

> Put simply: there is no uniform way to systematically map physical locations based on IP addresses or create a phone book to lookup users of particular IP addresses.

Maybe today, but when we have wide deployment of IPv6 (heh), won't ISPs do away with NATing and give everyone their own block of IPs? Then I would think you could reliably tie a person to an IP address as long as the ISP cooperates.

(1) It's unreliable. (2) It's unconstitutional, assuming judges agree. (3) It's expensive if you screw it up, such as when people die, lawsuits follow, or there's embarrassment. All of which is unlikely to change behavior unless everyone agrees.

I'll just point this out here: the Reena Virk case started as a rumour going around in schools, until her body was found eight days later. A little bit of prudence is necessary, but don't discount rumours out of hand.

Cops have limited resources to deal with a number of problems and if they don't have the training and procedures to use internet evidence they are going to waste those resources tracking down stolen cars, child porn and whatever in the wrong places.

> Law enforcement's over-reliance on the technology is a product of police and courts not understanding the limitations of both IP addresses and the tools used to link the IP address with a person or a physical location.

You can most certainly narrow down an IP address to a particular ISP customer. Is it possible that they have an open wifi? Yes. Is it possible to narrow it down to a single member of the household? Depends! Is it possible that a computer at the destination is being used as a proxy by the real attacker? Yes! But it's certainly not the black box that the EFF is trying to portray it as.

It's totally appropriate to execute a search warrant based on IP logs. A search warrant doesn't mean that any particular person is guilty, just that there is probable cause that there is information about a crime at a certain location.

> IP address information was designed to route traffic on the Internet, not serve as an identifier for other purposes.

I think you're going to have a hard time convincing a jury or judge with this argument. In general, law enforcement isn't concerned with the original intent behind what an IP address was designed for. At least with today's ISPs, an IP address can be a reasonable approximation of a person or persons.

Their spin that it is "our super advanced Intel RAID chipset" really plays in their favor, given that their BIOS uses a single goto statement to intentionally block access to the AHCI-compatible mode that the hardware so readily supports, as evidenced by the reverse-engineering work and the fact that other OSes detect the drive after the AHCI fix using the custom-flashed BIOS.

So, why are they reluctant to just issue their band-aid patch to the BIOS -- after all, it's really the path of least resistance here?

Yes, there has been some deflection of blame here. The argument that every single OS except Windows 10 is at fault for not supporting this CRAZY new super advanced hardware doesn't make much sense.

"Linux (and all other operating systems) don't support X on Z because of Y" doesn't really apply when "Z modified Y in a way that does not allow support for X."

To state it more plainly, this "CRAZY new super advanced hardware" has a trivial backwards compatible mode that works with everything just fine, but it is blocked by Lenovo's BIOS.

It was a shame to see the initial posts this morning hit the top of the page without any more evidence than a single customer support rep, who was unlikely to realistically have inside knowledge of some kind of "secret conspiracy" by Microsoft to block Linux installs.

There has been a disturbing level of contempt for the people that were concerned about the future of Free Software. There has been a major shift towards more locked down platforms for years ever since iOS was accepted by the developer community. With Microsoft locking down Secure Boot on ARM and requiring it for Windows 10, it is prudent to be extra vigilant about anything strange that happens in the boot process. The alternative is to ignore potential problems until they grow into much larger problems that are harder to deal with.

Obviously vigilance implies some amount of false positives. It is easy to dismiss a problem once better information is available. It's great that this Lenovo situation is simply a misunderstanding about drivers, but that doesn't invalidate the initial concern about a suspicious situation.

Wasn't Lenovo the company that shipped unremovable malware with laptops? Considering the almost impossible to disable Intel management stuff is also there, I can only imagine the kind of parasite living on these machines.

For what it's worth, I've had issues with Intel RST under Windows as well in mixed-mode configs. My boot device is an SSD configured for AHCI and I have a 3-drive RAID array. On a soft reset of my PC, the BIOS won't see the SSD. The completely nonobvious solution? Make the SSD hot-swappable. Not a Lenovo PC, either. This had been going on for years; I had to do a hard reset every time I restarted, until I finally found that fix.

What is crazy to me is that Lenovo is usually the brand that people recommend for Linux laptops. They are shooting themselves in the foot here. They may think that the number of people on Linux is too small, but I bet it is bigger than they think. It is just that there is no easy way to accurately census the number of Linux users on their HW.

Pushing Intel to provide the drivers or at least documentation would be the best solution - the BIOS lock would become irrelevant.

However, I don't agree with conclusion that Lenovo isn't to blame. They went out of their way to ensure that even power users playing with EFI shell won't be able to switch to AHCI mode.

I don't care about Microsoft here. Lenovo showed its bad side and I probably won't be buying their devices anymore - which is a pity, as I'm writing this on my Yoga 2 Pro, with my company's Yoga 900 (fortunately older, unblocked revision) nearby and I liked those devices.

Yeah, sure, Microsoft is now all white and fluffy. Best friends forever.

How about we pay some attention to the second part of:

Lenovo's firmware defaults to "RAID" mode and ** doesn't allow you to change that **

Power savings or not, but locking down storage controller to a mode that just happens to be supported by exactly one OS has NO obvious rational explanation. Either Lenovo does that or Windows does. This has nothing to do with Intel.

It sounds to me like it would be quite trivial to run Linux on this laptop, just by treating the "RAID" mode PCI ID like AHCI and employing the regular driver. I believe Linux supports forcing the use of a driver for a PCI device.
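Something like this sketch, using the kernel's generic sysfs new_id interface (run as root; the vendor/device pair below is a placeholder, not necessarily what this particular controller reports - read the real one from lspci first):

```python
RAID_MODE_ID = "8086 282a"   # HYPOTHETICAL vendor/device pair

# /sys/bus/pci/drivers/<name>/new_id tells an already-loaded driver to
# also claim an extra PCI vendor:device pair it doesn't know about.
with open("/sys/bus/pci/drivers/ahci/new_id", "w") as f:
    f.write(RAID_MODE_ID)
# If the controller really speaks AHCI behind its RAID-mode ID, the
# ahci driver should now probe it and the disk should appear.
```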

However, it would appear that the version of mdadm in shipping versions of Ubuntu (at least - maybe other distros too) doesn't support the Smart Response Technology (SRT - http://www.intel.com/content/www/us/en/architecture-and-tech... ) feature that's part of RST and is used by Lenovo to build a hybrid one-stripe RAID0 device from the HDD with a cache on the SSD (I'm sure Lenovo have a good reason for not using an SSHD). Dan Williams of Intel submitted a series of patches to mdadm to support SRT back in April 2014: https://marc.info/?l=linux-raid&r=1&b=201404&w=2 . Perhaps now that there's shipping hardware that requires them, there'll be the impetus for distro vendors to get them integrated into mdadm, and for their installers to auto-detect them and use the functionality provided sanely.

---

I should add that mdadm is not present in Ubuntu live images by default - one has to pull it in by issuing "sudo apt[-get] install mdadm". BTW, I don't know if mdadm would detect the RAID controller/disk immediately upon installation, or it would require a reboot. In the latter case you may wish to use a USB key with enough spare room to save the system status and reboot. I'd use UNetBootin to prepare such a USB key.

The main issue here is, a user who doesn't even see a disk probably wouldn't know to go as far as installing mdadm. IMHO, given the broadening diffusion of NVMe and RAID devices, Debian, Canonical, Red Hat, Fedora, etc. might wish to make mdadm part of their live images by default (and eventually strip it from the installed system if it's unnecessary).

Seeing a manufacturer use fake RAID, by default, on a single-disk system, then unfathomably hardwiring this into the firmware so it can't be changed, then having a Lenovo rep actually admit the reason while the forum thread gets censored, and then seeing this kind of defence, is downright hilarious.

Garrett should be condemning Lenovo for not making a perfectly configurable chipset feature....configurable and defending Linux and freedom of choice on hardware that has always traditionally been that way. But, no, he doesn't. He defends stupidity as he always does.

Oh it's funny to see the comments in this thread talking down about people on reddit when the misplaced outrage was just as loud here. In fact, I got buried here for pointing out that the claim was BS and unrelated to SecureBoot where at least Reddit took it thoughtfully and realized it was probably just a bullshit statement from a nobody rep that got blown out of proportion.

"For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is miniscule."

This is a really poor argument, and slightly disingenuous. Sometimes, people change their use for a device. Maybe they want to explore linux in the future, maybe they want to sell the laptop to someone who wants to use it for linux...

That the blame is being possibly misdirected ought not to detract from the fact that blame is necessary. If users don't vocally oppose measures like this, the industry will assume that this kind of restriction is reasonable. It's not. Yes, power management is important, but anyone who puts linux on their laptop will quickly learn there are limitations to the features of that device that were originally tailored to the OS the device shipped with. That's a good lesson, and a good opportunity for a community to develop around the device (if it's good enough) to mitigate those deficiencies and adapt them for the particular linux distro.

In short, Lenovo is at fault for not being up front about this limitation, for not explaining it, and for not devoting at least some resources to mitigating for their potential linux-inclined users.

Then again, perhaps a linux-inclined user might also be one of the many that don't trust Lenovo after their self-signed certificate scandal.

Please do not look upon popular economics best sellers as a good way to get a rounded economics education. While many have value in critical insight and entertainment, they often offer only a narrow perspective on economics. Novice economists typically lack the ability to critically appraise them without a wider economic framework to work from.

An academic reading list (i.e. university course texts) will provide you a good theoretical foundation as to how economists interpret and model real economic issues. It's important to grasp the plethora of important economic concepts like diminishing returns, comparative advantage and concepts of market efficiency (among many other things) and how they apply within micro or macro economic issues.

With some foundational knowledge in place, a good economist then goes on to relax the underlying assumptions and look for analogues in the real world. This is where the popular reading list comes in: such books often take a deep dive into specific areas, e.g. where traditional economic assumptions break down.

In short, the academic reading list gives you a framework to understand economics. The best seller list tempers that framework with real world exceptions, paradoxes and open questions.

It's a bit disappointing to see a real academic reading list so far down this comment page (I strongly recommend looking at oli5679's suggestions). I doubt HNers would suggest reading up on javascript as a good foundation for a computer science education. Yes, you can become a well rounded computer scientist by starting on javascript. But it's more important to have a grasp on core computer science ideas like algorithm design & analysis and automata.

One approach is to go to the MIT OpenCourseWare website, look for the economics department, and look at their reading lists.

Of course, that's going to be mostly academic reading (textbooks, etc.). But if you want to learn the basics, it's probably safer to start there than the pop econ books (and I would hold off on most heterodox reading until you're able to assess it within a larger framework).

Two good books that haven't been mentioned here:

Economic Theory in Retrospect, by Mark Blaug. Very useful to get a good historical grounding in the main ideas that compose today's orthodox economics.

The Applied Theory of Price, by McCloskey. Your usual microeconomics textbook, but far more thorough, insisting a lot on grasping the intuition behind the concepts. Available for free from the author's website here: http://www.deirdremccloskey.com/docs/price.pdf

Top of my list would be "The Ascent of Money", by Harvard Prof Niall Ferguson. It explains what money and financial instruments are by telling the stories of their history. He's a great storyteller, and for each aspect of finance that he explains, there's a story of a famous piece of history which it caused. For example, the application of oriental maths to finance caused a huge boom for Italian bankers, especially one family, the Medici. That financial boom was responsible for the artistic boom we call Renaissance art. Or how the Dutch republic triumphed over the enormous Hapsburg empire, because the world's largest silver mine couldn't compete with the world's first stock market.

It's extremely readable and funny and covers most of the situations in real life where you can apply economic concepts to understand why something is the way it is.

Understanding why countries and economies grow (and why some grow faster than others!) doesn't always fall under the "economics" umbrella but is really useful for informing policy (and a useful reminder these days, when both US presidential candidates rail against trade agreements). "From Poverty to Prosperity" lays out a very readable and convincing argument for how countries have grown and become rich. https://www.amazon.com/Poverty-Prosperity-Intangible-Liabili...

The following list will introduce you to Western Economic Philosophy as it relates to modern history specifically. This list is weighted heavily toward neo-classical economics and does not get into computational model based economics - specifically microeconomics, which comprises the bulk of economics education today:

Schumpeter - History of economic analysis

Adam Smith - Theory of Moral Sentiments

Keynes - The General Theory of Employment, Interest and Money

Marx - Capital

Benjamin Graham - The Intelligent Investor

Galbraith - The Affluent Society

Galbraith - The Great Crash

Milton Friedman - Capitalism and Freedom

Nassim Taleb - Black Swan

Ron Suskind - Confidence Men

Scott Patterson - Dark Pools

If you want to delve into heterodox economics afterward, start with the following:

I work in a quant hedge fund - I'll give you my take. The first thing I would point out is that there is a massive difference between academic theory and practice. I don't want to turn this into an anti-academic rant, but I do want to emphasise that we value very different things. For this reason alone, most of what you read in most textbooks won't do you much good.

Personally I wouldn't place too much emphasis on outside knowledge. Basic knowledge of economics wouldn't hurt, but don't go nuts. Khan Academy will give you more than enough theory. You don't want to spend all your energy developing a skill that a trained economist applicant will crush you at. Neither should you focus too much on e.g. stochastic analysis. In the real world, no-one cares whether a stochastic process is previsible or progressively measurable. But knowing how to derive Black-Scholes couldn't hurt.

So far I've mostly talked about what you shouldn't read. I'll try to talk a little bit about what you should. Read the financial press. The FT or the Wall Street Journal, depending on where you're based. Read finance blogs. Frances Coppola is good. So is the Bank of England's blog. Check out Alphaville at the FT too. You'll be expected to know what's going on in the world right now. Could you explain what QE is? For a finance job, that's more important than knowing what the IS/LM model says. What's been going on in China recently? What do you think about their currency outflows?

Know how to code. At least one of Python, Matlab or R for the buy side, one of Java or C++ for the sell side.

Most importantly, though, you should be able to demonstrate enthusiasm. Any given junior quant role will get hundreds of applications, and some demonstrable interest will put you head and shoulders above the pack. A link to some decent analysis on github would do (none of the hundred or so applicants to the last position we advertised did that). Play with some financial data. Quantopian is apparently a good resource.

I've talked about how to prepare for a general finance job. The specific reading you should do will depend on exactly what job you want. Do you want to be a quant? If so, buy side or sell side? Read up on the difference. Go check out efinancialcareers, have a look at the skills they're asking for within each sector, and take it from there.

I read Dubner and Levitt's Freakonomics in 2005. It's lame to say that a pop-science book changed my life, but since then I've thought about economics every day.

I would recommend some pop-econ to become familiar with a stylized version of how economists think. I'd recommend Tim Harford's The Undercover Economist Strikes Back and The Logic of Life and Robert Frank's The Economic Naturalist. (Dubner's and Levitt's books are entertaining, but I wouldn't try to learn much about economics from them)

The world of professional economists has been fascinating to watch over the last 10 years, as academic economist blogs are very active and very high quality. Watching debates and commentary about the global financial crisis unfold on the blogs in real time was really something. Economist bloggers have a real influence on policy now, and whole schools of thought have coalesced out of blogs (e.g. market monetarism).

There are some excellent economics podcasts out there now. EconTalk (with Russ Roberts) has been going since 2006. I'd recommend listening to some of his interviews with academic economists. Macro Musings (with David Beckworth) just started this year, and the policy discussions have been quite informative.

The Marginal Revolution University website has a fantastic series of videos on economics topics. The "Development Economics" course I would recommend strongly - I wish I'd been taught the Solow Model in school.

Economics is a very interesting discipline to study from the outside. Learning a bit about it puts policy debates in a new light - I've become much more liberal on some topics and much less confident on a lot of topics. I find that reporting about economics issues is generally pretty terrible, so beware that if you get into economics you'll want to stop reading a lot of news analysis.

This one was recommended by the former head of NYMEX to me when I started my career in trading. Written about Jesse Livermore who made and lost his fortune multiple times. He was often blamed for rigging the market, but his lesson is simple: you basically can't rig the market; it will destroy you way more easily. Take what the market gives you and be happy it even decided to give you that:

And you'll see a lot of recommendations for everything from Hazlitt to Piketty, but my favorite you never see recommended for macro is The Way the World Works by Jude Wanniski. He was one of the lifelong Democrats who became a Reagan advisor (and basically quickly turned back into a Dem before passing away about ten years ago):

It is not only a fantastic high level view, but it gets granular enough to explain things like how US Treasury prices are quoted in 32nds of a dollar, how fixed income securities are identified by something called a CUSIP, or what a strike price is for an option. Granular enough to explain practical day to day concepts that would help you at your first job in a financial firm.

IMO start with a recent book that spells out useful pointers to give the classics a critical read:

"Debunking Economics", by Steve Keen.

Keen gave a talk at Google a few years back that was a pretty good summary of what's in the book's first version.

If you're into stats and finance also check out the author's finance classes on youtube. Besides a bunch of videos that cover what's in his book, there are quite a few on financial modeling, and at least one video in there that delves into power laws and financial markets.

Also, try to throw a few history books into your mix: history of the world, of science, and of ideas. History helps contextualize and make sense of what was going on in the minds of contemporaries as economic theories matured.

* John Locke's Two Treatises of Government - It's political philosophy but it's hard to understand Classical Liberalism without having read some Locke.

* Adam Smith's Wealth of Nations - He and Locke are the two main guys to read for a solid start on Classical Liberalism, which is completely different than modern political liberalism. It's like having two features in an app with nearly the same name. Confusing as fuck.

* E. F. Schumacher's Small Is Beautiful: Economics as if People Mattered - This book will shift your perspective, useful to avoid becoming a mindless advocate for one school of thought or another.

* Marx is a tough one as Capital is massive and unreadable and The Communist Manifesto is a propaganda pamphlet but I think you need to at least find some articles that summarize the basics.

* Milton Friedman's Capitalism and Freedom - Yes, read it. I hesitated to include it as the guy's so good at making the case that it can turn you into a market advocate bot. Please resist that.

Can someone help me on this: is there a book to balance Hayek and a book to balance Friedman? I'm sorry, but Keynes doesn't do it for me. Look at the difference in titles between Hayek and Keynes. It's hard to get motivated to read the Keynes book, but nobody ever has trouble reading Hayek.

I see a lot of these ideas come up on HN a lot. What I don't like so much is when someone becomes an advocate for a particular ism. To me, all isms are rubbish. All of them. Understand but do not become a shill for an ideology.

I found Larry Harris' Trading and Exchanges: Market Microstructure for Practitioners a solid introduction to market making and trading. Terms and concepts are easy to pick up from the text. I was comfortable enough after reading it to skim stats journal papers talking about market making models. The Stockfighter team had mentioned it in older threads here. It's expensive, but I just borrowed it from the library at my university instead of buying.

My first read of your request made me think you were looking for books mainly for personal intellectual growth. There are a lot of answers in that vein, as well as a few that seem suitable replacements for an undergrad econ degree. A second read made me wonder if you're actually asking for practical advice about what you should read in order to get a job in finance, given you won't take many econ or finance courses. I'll answer in the second vein, as it seems to be somewhat underrepresented.

There should probably also be a category for what I think of as quantitative fundamental investing. For an idea of what I mean, look at what the investment firm AQR does. I'm not sure of good books in this area though.

"Economics in one lesson" is a classic worth reading and thinking about. While you don't necessary have to follow the libertarian way of thinking it guides you to, it still shapes your critical thinking about economic policies a lot.

To better understand our monetary system I highly recommend watching the "Money as Debt" movie. It's on youtube as well as http://www.moneyasdebt.net/ (which I think links to y/t anyway). It provides a pretty good explanation of gold-backed vs credit-backed money and is fun to watch.

Perhaps the greatest merit of Frieden's book is that it allows the reader to see the themes of winners and losers, risk and uncertainty, integration, economic growth and technological change emerge clearly from the deep forest of contemporary history. One gains a greater appreciation for the timelessness of these phenomena and how to begin to get a grip on the bigger picture of policy making and the global economy.

If you know Chinese, there is a must-read: Economic Explanation, by Steven Cheung.

If you don't, you can read Economic Explanation: Selected Papers of Steven N.S. Cheung. (Same book name but different content - a collection of essays vs. a book on theories.)

Why Steven Cheung? As a close friend of Ronald Coase, he too focuses on empirical research (the real world) rather than blackboard economics (the imaginary world); he hates the use of math for its own sake; and he emphasizes testable implications (positive economics).

His classic paper The Fable of the Bees is a great example of how empirical work destroys blackboard economics.

"Global Capitalism: Its Fall and Rise in the Twentieth Century" by Jeffry Frieden is a masterpiece. It will give you a thorough, expansive view of the global financial world - the major events and trends - as they unfolded over the last century. This book is regularly assigned as a text book in Ivy League economic history classes, so even though it's short on math/ econometrics, it's a serious work.

I work in finance. I agree with other comments that suggested the CFA Program curriculum.

Specifically, the CFA Level 1 textbooks are among the best introductions to finance and economics I've found. You don't have to sign up for the CFA exam; the textbooks can be bought separately. They might not be the most fun reading, but they are a very practical foundation (and will help put future readings in context).

You say you hope to get into finance but don't know almost anything about it. How did you decide to get into finance without knowing much about it?

I enjoy it but it's not for everyone. Finance is also huge. Economics is less relevant to finance than many realize (most roles do not require having studied econ and Goldman's CEO recently called the firm a "tech company").

May I humbly suggest, prior (or in addition) to spending precious time reading finance/econ books, speak to a few people who work in finance and read finance sites to get a better feel for it.

Books can be amazing, even if just read for intellectual curiosity, but they take a long time to read. There are other ways to learn which are quicker/more relevant to you vs. entire books.

Lastly, one "must-read" book is The Intelligent Investor by Ben Graham. The revised edition with notes from Jason Zweig is excellent. The industry is still obsessed with the book ~70 years after it came out and for good reason. Even if you disagree with it or think it's outdated (and many do), the book comes up so often it's worth reading to be in the loop.

I will resist the urge to tell you what NOT to read and merely recommend a few favorites:

1. I am a big fan of John Kenneth Galbraith, who writes very clearly about a few things. I recommend both "The New Industrial State" and especially "The Affluent Society", where he argues that economics is insufficient to deal with post-scarcity.

2. Deirdre McCloskey's "If You're So Smart" is a great skewering of the blinkered nature of economic inquiry. Much of what is wrong with economics is what is wrong with scientific inquiry generally (being stuck in a formalism, confusing their models with reality); this is an excellent criticism.

3. Anything by Ha-Joon Chang. He writes intelligently about development and globalization; he is unorthodox in his economic practice, and his arguments are simple and drawn from history. There are a lot of "My god, it's full of stars!" moments in his work.

Everyone seems to be addressing the finance part of it without the "growing intellectually" part of it. I've been fortunate to be surrounded by economists my whole life. Economists are also tremendous historians; reading a lot of history and recasting what you know about history into economic frameworks will greatly sharpen your intellectual abilities. As with most things involving learning, having and seeking out intellectual peers is a valuable way to challenge all your ideas.

It would be easy to give it the traditional libertarian gloss of "reducing regulation to improve the economy", but it's much more subtle than that. It looks at the costs of being outside the "system", and the benefits of simplifying the system so as to include more people and businesses. Along with land reform to reflect the actual reality of buildings.

People have mentioned different authors across different schools of economic thought, such as Mankiw, Rothbard, Friedman, Hayek, Smith, Keynes, etc. There's one that's also been mentioned which I particularly would avoid recommending, which is Piketty.

Those are the best recommendations.

I would like to give a recommendation that might be a little bit different: 'Why Nations Fail' by Acemoglu.

for the undergraduate background, and then at the graduate level Jehle and Reny for microeconomics, Duffie for asset pricing theory, Tirole for corporate finance, and Campbell, Lo and MacKinlay for econometrics.

There are no must-read books for economics (or for almost any other field of study). Non-fiction economics books are meant to teach the reader something new. As economics represents a set of ideas owned by no one individual, the best overview of economics will contain all of the important, integral ideas of the subject.

Any summary of economics that introduces the core concepts will be great and serve its purpose.

One I don't see recommended very often is: Fortune's Formula. It describes the lives of Claude Shannon and Ed Thorp (author of Beat the Dealer) and how they used the Kelly formula in both gambling and investing. The Kelly formula, as the book explains, determines the optimum amount to bet on a wager (or investment) if you know the edge you have over the house.
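For the simple win/lose case the formula is short enough to sketch here (my own illustration, not from the book):

    def kelly_fraction(b: float, p: float) -> float:
        # b = net odds received on a win (1.0 for even money), p = win probability
        q = 1.0 - p
        return (b * p - q) / b

    # A 55% chance of winning at even money says to bet 10% of the bankroll:
    print(kelly_fraction(1.0, 0.55))  # 0.1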

As far as economics is concerned, I recommend Mankiw's Principles of Economics. It's widely used as a textbook for economics undergraduates. It's very well written and entertaining. In my opinion, it's better than general-audience popularizations.

David Ruppert's "Statistics and Finance" is the classic you are looking for. It is a standard textbook in most finance curriculums in the US. Roughly 50% of the book is plain statistics as applicable to finance. The rest is finance with a statistical flavor.

Alternatively, don't learn from books, but the markets themselves. Open a Paper Trading account via Think or Swim. Begin a steady diet of Bloomberg / WSJ / CNBC every day. Whenever a word or idea is mentioned that you don't understand, Google it or consult Investopedia. Figure out what the Fed actually does. How debt and credit markets work. The microstructure of physical and electronic commodities trading. Maybe skim an online "Stochastic Calculus" class. Join Quantopian and master every algorithmic strategy known to humankind. Dive deep into cryptocurrency and blockchain technologies.

And who knows, perhaps one day you'll invent something that obviates the need for a global system of monetary trust ;)

at the moment i'm uninterested in the arc of economic studies in academia so the opencourseware reading lists seem the wrong place to start for me

can anyone suggest reading to understand how contemporary banks function, where can i get an understanding of a bank or credit union from a software engineer's perspective: dependencies, steps to start, challenges of running, protections from common problems, interesting emerging disruptions;

Derivatives Markets by Robert McDonald is a great textbook. I would not suggest reading it cover to cover, but it's a great reference for truly understanding bonds, options, etc.

I'd also recommend anything by Matt Taibbi, but only if reading about the shadiness of Wall Street interests you. His books are well written and fact checked, but definitely have a bias that you may not care for.

There are some really good suggestions here about economics and finance in general. I think having a solid understanding of the financial crisis is valuable in today's world. I recommend "All the Devils are Here" by Bethany McLean. It offers a well rounded, facts-first approach to explaining the crisis. It does not point fingers or assess blame, which is a valuable perspective.

"The Great Transformation" by Karl Polanyi. It's a tough read, because it was translated from Hungarian. It's an important read, because it provides an alternative analysis to both Smith and Marx. Polanyi was informed by recent developments in anthropology which contradicted the major theories of how modern economies had formed.

Understanding Wall Street by Jeffrey Little gives a good overview of many kinds of financial instruments, including stocks, bonds, and options. It's NOT an investment flavor of the week book and is now on its 5th edition, the first having come out over 30 years ago.

Thinking, Fast and Slow by Daniel Kahneman. He won the Nobel Prize in Economics in 2002 for his work in behavioral economics. I truly believe understanding human behavior and decision making is a key foundation to anything else you read in economics.

I was going to recommend Malkiel's book but of course it has been already mentioned several times. So I'll add to the list Zweig's "The Devil's Financial Dictionary" (funny but also educational) and Sharpe's "Investors and Markets" (more academic).

Niederhoffer did his PhD in statistics. He is nuts, but he basically invented quantitative trading. Maybe read his book "The Education of a Speculator" and the New Yorker article about him ("The Blow-Up Artist").

Preface: For a bit of I suppose... uhh, qualification, I took nearly every single upper division Economics class my university offered (~25). I did so because I LOVE Econ. Also, sorry for the rambling nature of this.

First things first, finance is only sort of economics; it's really just finance. I'd highly recommend taking an accounting class (or reading a book on it) and grabbing an intro finance book. Accounting will really help with jargon, and with some really basic things (like balance sheets). Also, "Security Analysis" [0] is the "only" book you'll ever need; Warren Buffett recommended it to Bill Gates, and now Bill Gates recommends it to everyone.

Back to Economics... There are two primary "groups" of thought... sort of like twins separated at birth who grow to hate each other.

The first group, neoclassical economics, focuses primarily on microeconomics and is largely mathematical. Its birth is largely due to economists wanting to make econ a "true science" like the physical sciences (biology, chemistry, physics). It starts around the late 1800s and really picks up steam around the time of Einstein. Math was hot and being applied everywhere.

A really interesting period to research and study is right after Black Tuesday (and before the Great Depression) and what the central bank didn't do (before central bank intervention in markets was the norm). While I really detest the bastard, Milton Friedman's work on monetary policy is pretty good science and generally holds up here. [1],[2].

I'm a Keynesian (I suppose-- Econ gets deep fast), and so you'd be nowhere without reading some of what Keynes did to get our asses out of the Great Depression (i.e. government spending). It's also more or less the birth of macroeconomics... You'll know you're good when you laugh at forgetting: Y = C + I + G + (X - M). Some good things to get started are looking at the IS-LM [3] model and AS-AD [4] model.

That gets you into the 60s - 70s. Tall Paul Volcker is the unsung hero of the 80s; read about him (he ran the Federal Reserve). After that, microeconomics starts to fragment into things involving game theory and behavioral economics (Daniel Kahneman is the man).

Econometric analysis, mathematically speaking, is just multivariate regression analysis for time series or cross-sectional data. More "modern" analysis is probably using panel data [5] (a combination of cross-sectional and time series). Calculus, linear algebra, and differential equations should prepare one plenty for everything but panel data analysis. The real "econ" part is applying solid econ theory to the mathematics you're using; a textbook will help [6]. For finance this is your bread and butter.
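To make that concrete, here's a minimal OLS sketch in plain numpy on synthetic data (just an illustration of the regression workhorse, not any particular econometric model):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 regressors
    beta = np.array([1.0, 0.5, -2.0])                           # "true" coefficients
    y = X @ beta + 0.1 * rng.normal(size=n)                     # observed outcome

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)            # OLS estimate
    print(beta_hat)                                             # close to beta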

Game theory will apply a lot of different mathematical tools. You will need to love pure math. To really get into it requires pain or love. I like a healthy amount of both.

So as it turns out, neoclassical economics is at most half of Economics. The other half is really where the "philosophy" comes into play. You're gonna need a quick history lesson to sort of see its subject matter. Economics really didn't exist before... the 1500s. You can try to apply economics to earlier times, but you could also just make shit up and post it to twitter. Both would be equally likely to contain truth.

Economics came into existence around the time the Dutch began developing trade routes (1550s). A byproduct of all this trade is tons of cash and goods - currency (silver, metals, whatever) starts to actually be used in society (before that it was mostly just a status symbol). It pisses off a lot of _institutions_, most of all "the church" and monarchies, because money is allowing people to gain power. It's usurping power from them. This is the rise of the "merchant class", and now thanks to money (trade really, but whatever, it's complicated) people are liberating themselves from the social status they're born into. Eventually modern republics appear, and governments form. Nations trading globally becomes more common (Dutch, English, Spanish) and we get to Adam Smith, David Ricardo [7], et al.

Now it's the 1800s. People are seeing the birth and growth of capitalism, industry, corporations, and the tumultuous death of agrarian life. Now the way the "common person" lives their day is dramatically changing; for a few it was better, for most it was worse. Some economists begin to ask why we are replacing these now defunct _institutions_ with equally shitty, or possibly shittier, ones. This more or less becomes the birth of heterodox economics, which largely studies the more abstract ideas like "institutions"; by its very nature the content tends to be philosophical.

By the 1920s heterodox economics is falling by the wayside. The content is less able to be tested like a physical science (i.e. no math/stats), so it's treated like a misbegotten child... By the 1950s heterodox content was marginal at best-- the cold war and fear of communism made (makes) people insane. Economists pretty much had to be pro-capitalism or face being called "commies" and thrown in jail, or worse, being named by a narc in a witch hunt. This was more or less the nail in the coffin for mainstream heterodox economics (at least for research in the Occident). After the cold war ended the nail got pulled out, but I wouldn't say it's really outta the coffin yet.

This book [8] isn't great but it's quickly digestible and will point you in the appropriate directions.

--------------------------------------------------------------------

Some Rambling to Finish

I'd highly recommend not just learning how to use the tools, but why we have them and where they came from. Economics is vastly deeper than the average person will ever know. That depth is greatly empowering and guiding when using its lenses to see and solve problems. One last thing, know there's no going back, you will see the world differently.

Just a nit, but the author keeps talking about object recognition while what he was actually doing is image classification. Object recognition actually consists of two tasks: one is classifying the object (this is a beer bottle) and the other is saying where in the image the object is. Additionally, it can/should detect multiple objects in the image. This is more complex than classification, which only associates one category with the image.

Did the author publish a repo for this? It's easy to get tensorflow going for basic image classification, but the hard part is actually making the robot move in a way that makes sense - using the camera and the sonar data to make decisions and then driving the motors. Or is this not autonomous?

-- How do you rank yourself among writers (living) and of the immediate past? -- I often think there should exist a special typographical sign for a smile -- some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question.

It makes me a bit of a luddite (and a heck of a curmudgeon), but it always makes me a little sad when good ol' ASCII smileys are rendered all fancy-like. There's something charming and hackerish about showing it as a 7-bit glyph.

I think that the article does a fairly convincing job of showing that this is just weird 17th century typography, but then again, there was enough experimentation with printing at the time that it also wouldn't surprise me if it was intentional, at least at some point in the typesetting process.

Nowadays, if a thread came about to propose ':-)', people would devolve into a debate about the proper use of the parenthesis, and at least one user would claim that '(-:' was a better choice, though it is the dark-horse option for the community.

A little over decade ago, when Norway's fund was called "the Petroleum Fund" and had "only" $147B, an article in Slate magazine explained what was special about it:

"Norway has pursued a classically Scandinavian solution. It has viewed oil revenues as a temporary, collectively owned windfall that, instead of spurring consumption today, can be used to insulate the country from the storms of the global economy and provide a thick, goose-down cushion for the distant day when the oil wells run dry."[1]

We have been taking a small cut of the hundreds of thousands of barrels of oil we have been producing daily for the past 100+ years and spending it as fast as we possibly can.

>Most of the oil companies exploring for oil in Alberta were of U.S. origin, and at its peak in 1973, over 78 per cent of Canadian oil and gas production was under foreign ownership and over 90 per cent of oil and gas production companies were under foreign control, mostly American. [0]

Visiting Norway, I always thought it is kind of a weird country. On one hand it's one of the richest countries in the world. On the other hand, I've seen so many young Norwegian women work hard cleaning toilets and hotel rooms. Such jobs would be considered "low rung" in the US, but in Norway they treat their low-rung jobs as something to be proud of.

I see lots of comments talking about the return on investment (~4% YoY) and the ~$60M in bonuses, etc. But I don't see anyone questioning why there is so much money invested in other companies outside Norway.

I'm curious to know: 1) Why do we have a savings fund double the annual GDP? Should we have a limit? Why is the excess not invested locally? 2) Is there an existing plan to define when the money will be directed into the Norwegian economy? The current GDP per capita is around $68K, which doesn't seem that much compared to the amount of money in the country's savings account. Why not invest in education and/or technology? 3) Why are there a few people earning so much money (e.g. ~$60M in bonuses) to manage the country's assets? Is the real purpose to make money or to save the money for future generations?

This is a great idea. Waste the spammer's time and it's no longer worth it.

The phone version of this is Lenny[0], a set of audio files/Asterisk script which pretends to be a senile, doddering old man (who has a duck problem). There's a reddit user who runs a number you can forward your sales calls to, and he'll pick out the best ones and put on YouTube[1]. The record is keeping a caller on the phone for 56 minutes.

> Imagine if this type of thing happened in real life. You walk out the door in the morning and you're immediately attacked by Parul, Kevin, and Amelie.

I laughed out loud at this, because it's exactly what I'm experiencing now in West Africa.

Street vendors are aggressive about selling whatever they have, and they seem to assume I want it - almost like I owe it to them to buy it - I'm not sure if it's because I'm White, or it's just their standard procedure for everyone that walks by.

On my 3 minute walk to the local store, I get a minimum of 10 people in my face, trying to sell me cell phone recharge cards, peanuts and limes. Every single day I say no thanks, every single day they try again, sometimes even on the walk back.

I've tried ignoring them or not responding at all, and that usually makes it worse - they'll yell louder and louder (assuming I have not heard), hiss, make a kissing noise, and eventually put themselves in my way so I'm forced to acknowledge them.

Amazingly, even when I do buy something, and I clearly have it in my hand (a bunch of carrots for example), every single street vendor selling carrots will still try with 100% effort to sell me carrots.

Back in the olden days, when the ping of death causing a windows BSOD was a thing, if I was online when I got spam I would immediately look for the spammer's ip and send them a ping of death. I could tell it often worked because then I'd get the same spam again 10 minutes later, so I'd do it to them again, then I'd get spammed again and ping them again until eventually they gave up.

I assume their mass mailing program would just start at the top of an email list and send them one by one, without tracking progress, so when the computer crashed they would have to start over. After a few crashes in a row hopefully the spammer would blame the spam sending program for crashing the computer and give up, maybe even demand a refund from whoever sold it to them.

I was just wondering: if every person did this with the spam they get (or maybe it was automated by Gmail), spammers would be overwhelmed with bot answers to their spam emails, and would not be able to differentiate between a potential victim's response and all the bot replies. This has the potential to actually SOLVE the problem of spam. Think this could work?

This reminds me of a script I wrote about a decade ago to deal with phishing sites. My script generated first and last names, email addresses, passwords, and credit card numbers that actually passed checksum validation. It would submit these fake entries to a phishing form just as fast as the remote end would take them, polluting their database/inbox/whatever with thousands of bogus submissions. Besides wasting their time and resources, it also smokescreened any legitimate submissions that might have come through.
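For the curious, the checksum in question is usually the Luhn algorithm, so generating fake-but-valid numbers takes only a few lines. A sketch of the idea (my own illustration, not the script described above):

    import random

    def luhn_check_digit(partial: str) -> str:
        # Compute the digit that makes partial + digit pass the Luhn test.
        total = 0
        for i, ch in enumerate(reversed(partial)):
            d = int(ch)
            if i % 2 == 0:       # these positions get doubled in the final number
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return str((10 - total % 10) % 10)

    body = "4" + "".join(random.choice("0123456789") for _ in range(14))
    fake_card = body + luhn_check_digit(body)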

Effing hilarious. Some years ago I spent a few days writing to a 'Russian bride'. It became instantly clear all replies were scripted; there was no connection at all with what I said (The full text of 'I, Robot'? Oh, what interesting things you do). So I'd say many if not most of the spam scenarios are automated, and the whole thing becomes too meta.

I'd love to see it have random answers that are unique based on the question. Then you make it a global service that hundreds of thousands of people can forward messages to, and then you waste spammers time en masse.

There is kind of an interesting Turing test scenario for AI here. Design an AI to maximize the number of replies (or total text written) by the spammer. The internet is vast and full of spammers; you'll never run out of real humans providing responses to optimize your system.

I once made the mistake of sending a joke reply to a spammer from my legitimate email.

Turned out they pulled my phone number from the WHOIS info on my domain which I can only assume they sold to some marketing companies as I received about a dozen cold calls from various "web agencies" from the states. A lot of them were relentless, calling me repeatedly and leaving voicemails.

But I disagree with the idea that inboxes are sacred, and disagree with the attitude of "how dare people send marketing to me!" Fraudulent spam is one thing. Plain old marketing or sales cold calls, though... you know people are going to do it. It is their job. And I'd much rather get emails than I can quickly delete and ignore vs. phone calls. And once in a while, someone actually hits on a service that is useful to me.

So I don't think the real-life scenario of people badgering you outside the door is accurate. The better metaphor would be one comparing your inbox to your actual mailbox. Sure, junk mail is annoying and most of it gets thrown out. But sometimes that restaurant down the street does send coupons.

The one for phones has been on HN before. This one for spam is nice, but not yet smart enough. With more smarts and some understanding of the messages, it could keep spammers going forever. It doesn't need to be very intelligent; it just needs to get up to the Eliza level.

If it detects a spam related to search engine optimization, it should have a list of about a hundred plausible questions it can ask on that subject, for example. There aren't that many spammed subjects.

Most email spam, though, is promoting a link, and can't handle an email reply. You'd need something smart enough to go to a web site and sign up with fake credentials.

>The N64 hardware has something called a Z-Buffer, and thanks to that, we were able to design the terrain and visuals however we wanted.

This was a huge advantage for them. In contrast, for Crash Bandicoot -- which came out for the PS1 at the same time -- we had to use over an hour of pre-computation distributed across a dozen SGI workstations for each level to get a high poly count on hardware lacking a Z-buffer.

A Z-buffer is critical, because sorting polygons is O(n^2), not O(n lg n). This is because cyclic overlap breaks the transitive property required for an O(n lg n) sorting algorithm.
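For anyone who hasn't worked with one: a Z-buffer moves the depth comparison to the per-pixel level, so no global polygon ordering is needed at all. A minimal sketch of the idea (plain Python, nothing N64- or PS1-specific):

    import math

    W, H = 320, 240
    zbuf = [[math.inf] * W for _ in range(H)]   # every pixel starts "infinitely far away"
    frame = [[0] * W for _ in range(H)]

    def plot(x, y, z, color):
        # Keep a fragment only if it's closer than what's already at that pixel.
        if z < zbuf[y][x]:
            zbuf[y][x] = z
            frame[y][x] = color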

The PS2 got Sony to parity; at that point both Nintendo and Sony had shipped hardware with Z-buffers.

I'll always remember the first time I saw Super Mario 64 in front of my very eyes in ToysRUS. It was as if every other 3D game in history suddenly didn't matter anymore. Here was the future of 3D gaming. Here was a game with unbelievably fluid controls in really large levels clearly designed to be explored.

Unlike most previous Mario games, there was no timer either. This only further encouraged players to really explore the 3D environment, collect the side-quest coins, and not be stressed out.

The opening quote totally blows my mind as a humble non-game dev. I never thought of it this way.

> Miyamoto: Ever since Donkey Kong, it's been our thinking that for a game to sell, it has to excite the people who are watching the player -- it has to make you want to say, hey, gimme the controller next! ...

The simple approach is to say, "to make a great game, it should be fun for the person playing it." But they've already taken a step back and approached it from the perspective that great gaming happens socially. Maybe this is one reason I cherished all the Nintendo games as much as I did. It's because the memories of playing them are always with other people and we're all having fun. It wasn't a solo act.

I should probably make an effort to finish that game someday. Not that I've finished many Super Mario games. I think I've purchased every one, but have completed maybe two of them. So many levels incomplete... I wonder if game devs feel bad working on higher levels, knowing only a tiny portion of players will actually ever see them?

I have fond memories of this game, and a lot of what they spoke about in the interview regarding what gamers enjoyed rang true for me. The movement of Mario did feel great, and I had a lot of fun exploring the environment, jumping in the water to swim, or seeing how Mario's movement was different in different environments. (I did notice his centre of gravity as well, and it seems like a great fit.)

It is great to read that they actually had players like me in mind when they created the game. This article actually makes me want to dig up the game and play it through again.

I would love to see something similar for Goldeneye/Perfect Dark. I've been slowly but surely working on building a demo FPS engine using a very minimalist implementation to learn about game dynamics, and I'd love to hear what sort of technical challenges were faced at Rare and how they developed their (albeit simplistic) enemy AIs with pathfinding.

I think the first time I played this game was with the Nemu64 emulator using a good computer and LCD monitor. The monitor alone made for a better experience than the scaly TV sets typical of the day. Also, being able to pause, save, and replay an area was nice.

The original point of ISOs was to offer to employees the opportunity to take an economic risk with stock options (by exercising and paying for the stock at the bargain price) while avoiding the tax risk (by generally not recognizing ordinary income from that exercise and being taxed only at the time the stock was sold, and then only as a capital gains tax).

AMT has since emerged to devour the value of this benefit. By having to include the value of the spread (difference between exercise price and fair market value of the stock on date of exercise) as AMT income and pay tax on it at 28%-type rates, an employee can incur great tax risk in exercising options - especially for a venture that is in advanced rounds of funding but for which there is still no public market for trading of the shares. Even secondary markets for closely held stock are much restricted given the restrictions on transfer routinely written into the stock option documentation these days.

So why not just pass a law saying that the value of the spread is exempt from AMT? Of course, that would do exactly what is needed.

The problem is that AMT, which began in the late 60s as a "millionaire's tax", has since grown to be an integral part of how the federal government finances its affairs and is thus, in its perverse sort of way, a sacred cow that cannot be touched without seriously disturbing the current political balance.

And so we get this half-measure that helps a bit: not by eliminating the tax risk, but only by deferring it, and only for some but not all potentially affected employees.

So, if you incur a several hundred thousand dollar tax hit because you choose to exercise your options under this measure, and then your venture goes bust for some reason, it appears you still will have to pay the tax down the road - thus, tax disasters are still possible with this measure. Of course, in optimum cases (and likely even in most cases), employees can benefit from this measure because they don't have to pay tax up front but only after enough time lapses by which they can realize the economic value of the stock.

This "tax breather" is a positive step and will make this helpful for a great many people. Not a complete answer but perhaps the best the politicians can do in today's political climate. It would be good if it passes.

> Only startups offering stock options to at least 80 percent of their workforce would be eligible for tax deferrals, and a company's highest-paid executives would not be able to defer taxes on their stock under the legislation.

I understand the desire to avoid a regressive taxation system, but why is it that every tax rule we create comes with 2x the amount of caveats and rules? Our tax system is becoming a mess.

At this rate, soon nobody will be able to file their own taxes without an accountant to sort through the muck. And complicated systems tend to benefit the wealthy.

It's quite common to owe taxes today for gains on the value of your stock -- which is an illiquid asset you can't sell. This puts employees in the position of shelling out cash to keep something that rightfully belongs to them, or simply abandoning it (failing to exercise) when they leave the company. This bill would defer taxes on gains up to 7 years, or until the company goes public.

If you are awarded stock options and you exercise them, you have to file an 83(b) election within 30 days, or else you are liable on all paper gains in the value of your stock.

Even if you file an 83b election, you are still liable for paper gains between the value of your options when you were granted them and the value when you exercised.

For example, if you were awarded options with a strike price of $5 and the company raised a new round of funding and the 409A valuation (& strike price of the new options) has risen to $15 per share, the IRS considers that you now owe taxes on $10 of income / share. In other words, it costs you not $5 / share to exercise but ~$8.50 including taxes.
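A sketch of that arithmetic (note the ~$8.50 figure implies roughly a 35% rate on the spread; at the 28% AMT rate mentioned elsewhere in the thread it would come to ~$7.80):

    strike, fmv = 5.00, 15.00              # grant strike vs. new 409A valuation
    spread = fmv - strike                  # $10/share of taxable "income"
    cost_per_share = strike + 0.35 * spread
    print(cost_per_share)                  # ~8.50, vs. the $5.00 exercise price alone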

So the tricky part about options is that they require money to exercise, money that you often don't have ready, in order to obtain an asset that is (a) not liquid and (b) may decline in value (c) you often can't sell due to transfer restrictions.

For example: one early engineer at Zenefits had to pay $100,000 in taxes for exercising his stock....and then all the crap hit the fan, and he likely paid more in taxes than his shares will end up being worth. Ouch.

As a result of this problem with options, many startups -- especially later-stage ones like Uber -- choose instead to offer RSUs, which are basically stock grants as opposed to stock options. You don't have to pay any money to "get" them like you do for options.

However, the IRS considers stock grants, unlike options, immediately taxable income. If you get 10,000 RSUs per year, and the stock is valued at $5/share by an auditor, you now have to pay taxes on $50,000 of additional income, for an asset that you likely have no way of selling.

Some startups allow "net" grants -- which basically means they keep ~35% of your stock in lieu of taxes. That solves the liquidity problem, but offering this is completely at the discretion of the startup and some don't, which leaves employees at the mercy of the IRS, again having to pay cash on paper gains of an illiquid asset.

This sounds great, though requiring "offering 80% of the workforce stock" and excluding the highest-paid executives seems vague - is this at time of hiring, when stock is issued, when fully vested, when taxes are due, or somewhere in between? I parted ways with a startup in the valley last year and exercised some shares on January 13th. If I had exercised just two weeks earlier, I'm told I would've been hit with north of $50k in AMT; I have until next year to figure it out now, but I wonder if I'm eligible. Also curious how long it typically takes for a bill to get through the House and the Senate and be passed.

I still don't understand why taxes are owed. If an option at the time of grant is worth $0 (which is how it's typically done, or is that not the case?), then you don't owe anything to the IRS until you exercise the option, i.e. buy shares at the option price and sell them at a presumably higher valuation and make some money, at which point you will need to part with some of it because it's income.

But if you never exercise the options, then you never owe any tax. What am I missing here?

More evidence as to why the income tax should be replaced with a consumption tax. Just let people make their damned money already and apply a simple tax when they spend it. Windfalls wouldn't be "dangerous" or punitive in that model, and savers would be rewarded.

--Of course I oversimplify the consumption tax, and safeguards would need to be in place to ensure it is not regressive with respect to necessities...

Perhaps I'm misreading the law, but it looks like it solves the wrong problem: It addresses a cash-flow issue rather than the tax liability issue.

Say you have options at FooCorp and you leave. FooCorp is illiquid and you have 90 days to exercise your 10,000 options. Your FC options have a $5 strike, but the company currently has a 409a valuation of $100/share.

To exercise the options you would need to pay $50,000 to FooCorp; then you would have a "realized gain" of $950k (($100 - $5) * 10,000), on which you would owe 28% in taxes that year, or $266k. So you would need access to $316k in total in order to exercise these options.

Two issues arise: (1) You may not have $316k just kicking around. (2) THE SHARES ARE ILLIQUID AND MAY BE WORTH $0 WHEN YOU CAN ACTUALLY DO ANYTHING WITH THEM.

The bill appears to help with (1) by letting you pay that $266k not now-- but later, when the company shares become liquid, or after 7 years (whichever comes first). But it does nothing about (2)-- you might exercise and then the company goes bust, and seven years later you owe $266k while your current position is worth -$50k... and because the taxes are AMT, you can't meaningfully write your losses off against the taxes you owe.

This kind of failure doesn't require FooCorp to fail. You could have options at $5, exercise at $100, and have things go liquid at $7-- ignoring taxes this would have been a $20k gain. But with the taxes you're still $246k in the hole.
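Running the numbers from the example above as a sanity check (a simple sketch; real AMT interacts with the rest of a return):

    shares, strike, fmv_409a = 10_000, 5.00, 100.00

    exercise_cost = shares * strike               # $50,000 paid to FooCorp
    paper_gain = shares * (fmv_409a - strike)     # $950,000 of AMT income
    amt_due = 0.28 * paper_gain                   # $266,000
    cash_needed = exercise_cost + amt_due         # $316,000 up front

    liquid_price = 7.00                           # the "goes liquid at $7" case
    proceeds = shares * liquid_price              # $70,000
    print(proceeds - cash_needed)                 # about -$246,000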

The issue all along wasn't that someone needed extra money. The issue was the potential huge losses. If it weren't risky you could find a lender to cover the execution price and taxes in exchange for a return when the asset becomes liquid. (E.g. having to pay the $266k up front but getting it returned later when the asset becomes worthless and you write it off)

If anything this makes the situation worse by encouraging more people to commit financial suicide by making it less obviously a bad idea while being just as risky as it always was.

How does this relate to the push for startups to change from a 90-day to a 10-year exercise window? It seems like that's a better option than this bill, since it gives employees a larger time window to make an exercise decision, during which the likelihood of options actually resulting in something liquid is much higher.

> Only startups offering stock options to at least 80 percent of their workforce would be eligible for tax deferrals, and a company's highest-paid executives would not be able to defer taxes on their stock under the legislation.

How would this affect the concept of phantom stock options? I worked at a startup that handed out phantom options instead of normal options, with "no taxes owed" as the main selling point.

"Phantom stock can, but usually does not, pay dividends. When the grant is initially made or the phantom shares vest, there is no tax impact. When the payout is made, however, it is taxed as ordinary income to the grantee and is deductible to the employer."

TL;DR: Deep Learning will become a commodity. Software will eat Deep Learning too.

I'd like to clear a bit of the hype fog:

DL is giving amazing results only when you have big sets of labelled data. Hence it will be much cheaper for companies to buy Google/Microsoft Vision/Audio REST APIs rather than paying the costs of cloud + data gathering + deep learning experts. So, I don't think we will see a massive growth in DL gigs.

Except in those areas where your own CNN implementation is needed (automotive, industrial automation), Deep Learning will be another "library" in the ever-increasing Software Engineering mess of gluing many open source libraries and REST APIs together to get something useful done. You need 1 guy training a Neural Network for every 100 software monkeys maintaining the infrastructure complexity. There are now many Software Engineering jobs because it's hard to glue and maintain publicly-available code to solve some specific business problem.

I think the same applies to many Data Scientist jobs, which are these days more about fetching/cleaning/visualizing data than doing machine learning on it.

Even the "machine learning" search on Indeed, with 9K+ results has 1300+ from Amazon, followed by a much smaller number (in low hundreds each) from Microsoft, Google, others (including some that look like staffing companies).

I completely agree with the idea that being able to use some deep/machine/statistical learning is going to be a toolset that data hackers need to have. I even think that there is a bit of the "build it and they will come" magic waiting out there.

But I think the best way forward is to be working in data and figure out how to generate value with deep learning - this will be much more productive than trying to seek out a deep learning gig in terms of promoting deep learning in the workplace. Heck, that's a suggestion I would be wise to take myself . . .

My question is: it feels like machine learning is reaching its "Rails" stage. You can implement the latest bidirectional NN or LSTM-RNN using a high-level API that already sits on top of another high-level framework. Even beyond the core setup it will do the peripherals -- smart initializations, anti-overfitting measures, splitting up your data, etc.
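For concreteness, here's roughly what that looks like in Keras (one such high-level API); the vocabulary size, sequence length, and task here are hypothetical:

```python
from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM, Dense

# A bidirectional LSTM classifier in a handful of lines.
# Hypothetical shapes: 10k-token vocabulary, sequences padded to length 100.
model = Sequential([
    Embedding(10000, 128, input_length=100),
    Bidirectional(LSTM(64)),        # the "Bi-directional ... LSTM-RNN" part
    Dense(1, activation="sigmoid"), # e.g. a binary sentiment label
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=3)  # training data assumed
```

The derivatives, initializations, and softmax/sigmoid machinery are all hidden behind those few calls, which is the point of the question below.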

Do people who implement (albeit real, useful) deep learning systems, but who have no formal machine learning background, who don't really know much or care about implementing derivatives or softmax functions because the frameworks abstract all that away - are these people getting offered jobs?

I actually stole this advice from the epilogue of some text about programming, and it really stuck with me. Otherwise your expertise is just too generic and you compete with a big pool of people who call themselves machine learning experts, because they can write a for loop in Bash.

>Speaking of math, you should have some familiarity with calculus, probability and linear algebra

Curious to know if anyone has had success learning/re-learning these as a mid-20s or older adult who works fulltime, and if you could potentially provide a list of books/courses to go through. I personally never learned anything past geometry (in high school). The most advanced math class I took in college was College Algebra. That means I never learned trig or anything past it (so no calc, linear algebra, or probability), and I'm sure most people on HN surpassed me math-wise sometime in high school :)

I've been able to skate by with my embarrassing lack of math knowledge/skills as a developer, but I feel like it's only a matter of time until the mathematical steamroller becomes a serious threat career-wise and I get crushed.

I just want to be a software engineer without having to continually burn away evenings and weekends studying the latest shiny for the next two decades, just to keep my career afloat. Is that even an option anymore?

This is applied deep learning. There are a ton of jobs available for taking someone's library on GitHub and applying it to a bunch of data. But other than DeepMind, FAIR, Google Brain, OpenAI, Vicarious, and Microsoft Research, who is hiring for theoretical machine learning? That's what I'm interested in: developing better algorithms that eventually approach AGI.

IMO, Twitter is the poster child for the tech bubble. They have users, which is their only claim to viability, but notably, they have never made a profit. Currently valued at around $10 billion with 350 million active users, that's about $29 per user. You'd be hard-pressed to find an investor so foolish that they would invest $29 in each of their users and hope to make it back if it were stated in those terms, but people have rushed to invest in a company which only has users, and whose attempts to monetize users through advertising have correlated strongly with loss of users. There can be little argument that Twitter's price has become entirely detached from its value.

That doesn't, of course, mean you couldn't make money by investing in Twitter. You can make money investing in overvalued companies as long as you don't hold onto your shares until the bust. One profitable route would be if Twitter does get bought by a larger company. The market as a whole will lose on Twitter, but local maxima can be more profitable than the whole.

But at a personal level, don't be naive about this. A lot of people are investing, not just money, but time and energy, in Twitter or startups like Twitter. If you find yourself thinking that Twitter is a company with any real value, you should take a step back and evaluate whether you're being wise, or whether you've fallen prey to the unbridled optimism of the tech bubble. Twitter's position as poster child for the tech bubble makes it a good litmus test for people's understanding of the industry, and I suspect it will correlate very strongly with who loses everything when the tech bubble collapses.

Can someone actually explain to me how the situation came to the point where it practically looks as if Twitter's fate is being decided and played out in the media via endless speculation? It is not like Twitter is a tiny company with an unknown brand, few users, and no possibility of improving its profit margins. I am not aware of what they are trying to do, but at the same time it is not as if they could have exhausted all the possibilities. Remember Facebook's Beacon? That failed, but FB still managed to repackage the same crap into something more lucrative, didn't it? Is this just impatience from stockholders?

For example, let us just say, hypothetically, something really damaging comes out about FB (e.g. the news about the fake video view metrics) and advertisers start fleeing from it. Wouldn't Twitter be the beneficiary of at least some of that exodus? Do they really have no option of an end game?

Google has a bad story with attempts at social media, apart from YouTube. (Bought Orkut, killed it, tried Google Plus, went nowhere). Twitter is hard to make profitable without alienating the users with too many ads.

For Google, it would probably be an acquisition like YouTube. With the knowledge that it might never be profitable, but intended to get control over a significant asset. But sharing Google infrastructure and resources could probably bring down operating costs in the medium term.

I think Twitter's recent foray into becoming a content streaming source (see: NFL) is very interesting and a natural next step, albeit a late one. The user base is already there to essentially compete with Twitch and other streaming providers.

Seeing a lot of arguments about profitability that don't make sense. At Twitter's scale, profit sensitivity to even minor tweaks in ad rate/targeting/placement is massive. They could also go into 'maintenance mode' tomorrow and turn a massive profit (it would just be stupid).

Active users is a poor metric for Twitter. It's much more about the views. A relatively smaller number of people on Twitter can command an outsized influence. It's a fundamentally different kind of network.

Twitter's future will probably be more about monetizing its viewership. It's definitely not going to disappear anytime soon.

Is Google the suitor, or, is Alphabet the suitor? The article says Google, but this could be out of habit. I am not sure that answering my question changes much about the news. But it might say something about how Alphabet views Twitter based on who they decide owns the acquisition and where Twitter would fit within the company (as subsidiary or under Google).

Anecdotally I use twitter to advertise my SaaS monitoring product, Cronitor, with far more success than we found with AdWords. The ad platform feels easier to use, and promoting content on Twitter is less of a time investment vs selecting, culling, and optimizing sets of keywords.

Would it be a bad idea for Twitter to charge a small yearly fee for use? The reason I think this might work is that some users are very loyal and might not mind spending $20/year for an advertisement-free Twitter. They have about 350 million users, according to a comment here, and if 50 million users stayed, that would be $1 billion in revenue per year. With many fewer users their cost of doing business would be reduced, though with reduced network effects the service would not be as valuable to users. I like Twitter and would pay $20/year in return for no promoted tweets.

By any normal metric, Twitter is a huge success. 300 million people find their service useful.

But it was already huge success when it had no business model. Moreover, what is fundamentally valuable about Twitter to its users--sharing and discovering little bits of textual expression over a publicly visible social network--is not very expensive.

From the perspective of the users who find it valuable, why does it need a for-profit model at all? Why can't we just subsidize it as a non-profit via grants and donations, a la Wikipedia? I'm pretty sure you could do the important thing that Twitter does--ignoring all the extras devoted to figuring out how to extract more money from the data--at a small fraction of its $2 billion in revenue.

I'm not being naive here -- it's quite obvious why things are the way they are. But there are many examples out there of making a big impact while making a decent living (just without anyone trying to become a billionaire). Social networking is ripe for more of this approach. The attempts so far have failed not because of their business model, but for the usual reason: poor execution.

Perhaps off-topic, but does anyone know why Twitter gave up on its effort to monetize its API? There was a moment, circa 2010, when that seemed like the obvious move. When Twitter first began to shut down access to its full firehose, it seemed clear that there were businesses willing to pay for its information. But Twitter suddenly turned away from that idea, and focused on advertising. Considering how many sites compete for advertising dollars, it seems crazy that Twitter felt that was the right way to go.

But then Facebook went down the same path, first promoting its API, then largely giving up on any attempt to monetize it.

And before that, way back in 2006, I tried to build a business that would rely on Technorati's API, which they briefly promoted, then gave up on.

There are a lot of companies that make money by selling information via an API. And there is tremendous competition for ad dollars. These 2 facts would lead me to expect more companies might try to make money from their APIs. But what happened in Twitter's case?

Might sound strange, but I think this would be a great purchase for Apple. They have the cash, and they certainly have the engineers and UI skills Twitter desperately needs. iMessage works brilliantly, but closer integration with a Twitter-style feed makes real sense to me.

I think this is brilliant. Even the press details seem perfectly crafted, with one article referencing Evan's "supermodel girlfriend."

Snapchat can win here based on brand alone. The hardware features are a plus, but they're going to sell a lifestyle. Think GoPro + Versace. Commenters here are caught up in the tech. It's not the tech. Get a few celebrities in these, people will buy them and barely use the recording features. They're cheaper than Ray-Bans and I bet you and half of your friends own a pair of those.

Snapchat can assemble an AR powerhouse from the ground up with brand goodwill. Evan and his team have figured out the best market strategy to do so. Google is not "cool" and could never attempt to pull this off.

I have tremendous respect for Evan Spiegel right now. Bold move. Amazingly positioned. I wish them the best of luck. Dare I say, it has the scent of Jobs to it - the vision, the risk ("we make sunglasses now!") and definitely the "cool-factor." Don't misinterpret - this isn't the iPhone, not yet anyway, but I think they're on to something very big.

I think this is significantly better than what Google did with Google Glass.

It's better because it focuses on the one thing that is really easy to do well. It does not try to do everything at once. It doesn't try to give you apps in your glasses and everything under the sun. This is the right approach to products. Do one thing but do that well.

Before you criticize me, think back to the original iPhone: it didn't start with an App Store and everything under the sun like the Apple Watch did. And yet the iPhone is an icon and the Watch is no big deal.

Hype and grumbles aside, I believe optimizing the "I want to record what I'm seeing right now" to a tap near your temple is pretty compelling. Fumbling to get my camera out of my pocket, or even just grab from tabletop and swipe-to-cam is often long enough to miss that precious moment with my daughter.

Why? You need to seriously question the motives behind such a launch. IMHO:

[1] Snapchat is an online multimedia application.

[2] The infrastructure required to move from online to hardware requires significant investment (beyond the $1.8B they recently raised), which I don't believe Snapchat can fund without a serious re-monetization strategy beyond ads. It is only a matter of time before FB moves into Snapchat's market even more than they already have.

[3] This is an unproven market. Google tried it and didn't succeed. A better play: let someone else test the market a bit more, then move in with a solid ad monetization strategy around the Spectacles.

[4] Why hardware?! Seriously? I believe Evan is overplaying his hand with so much VC capital coming his way.

Even though I'm not "inb4" Glass comparisons, this really does hit a market that I think is untapped. I used to have a "flipcam". It was before I had a phone that could take HD video, and before a GoPro was a choice for me because of cost (I still don't have a GoPro).

The ability to have cheaper, stylish, handsfree video recording of my POV has a lot of potential. How-to videos, the "capturing memories" as noted in the article, even just easily recording benign life experiences (police stops, for instance) seamlessly and without hassle is huge.

I do hope there is a tattletale light or something so that the average user can't surreptitiously record things and otherwise easy privacy controls... and I hope it's not long before someone hacks this or they unlock the product to do more than 10 second clips...

If I were GoPro I'd be nervous.

Edit: Actually a second thought- this would be a lot better than body cams in a lot of situations (or certainly a good companion) because it would capture the officer's line of sight.

Just like Google Glass users being called Glassholes, SnapChat glasses will probably be called something like SnapChads, because only white rich guys in pastel shorts and rugby shirts named Chad will use them. The aesthetic just isn't there for wide adoption.

Being someone in the AR space, I find this a smart but risky move. If they're marketed right and become "cool", I'll definitely have to cop a pair (and at $130 they're almost disposable). Spectacles will make it way easier for me to post to Snapchat at parties/concerts/etc. without having to break out of the moment by taking my phone out. Strategy-wise, this is a Trojan horse into the AR hardware space, which Evan has wanted to get into for years. And they fit way better into Snap's image as a media company than directly launching an AR headset would.

If this means I can go to a public performance and no longer have to try to look past the sea of upthrust arms and glare of 1000 brightly lit screens to see what I came to see then it can't come quickly enough!

Particularly since I feel it will inspire the next product: an IR floodlight that renders all digital cameras useless, since there are so many people oblivious to the fact that by trying to capture the experience for themselves they're detracting from the experience for everyone else.

Letting people who need a digital memento silently get one without intruding on the experience of those of us just there to enjoy and be in the moment is a great compromise.

Snapchat has a huge opportunity in its hands which it has been slow to take full advantage of: starting a revenue-share program with influencers on the platform. Facebook has yet to do it, and Snapchat, which is flush with VC dollars, can attract a lot more influencers to its platform. I think the companies on Discover are already in some sort of revenue-sharing agreement with Snapchat, but bringing this to the massive number of young influencers would unlock huge opportunities for Snapchat.

I'm amazed the top-rated top-level comments are all so positive. We have enough people shoving cameras into devices and situations where they don't belong. At least we know what they look like now so we can ostracize anyone wearing them.

Well, I'll be completely straight and say this isn't anything new (you've been able to buy similar video glasses from China for about 5 years now), but if it can properly integrate with the app, and slim down a LOT more -- to the point the camera is unnoticeable -- they could finally start making some money. Well, until the Chinese knockoffs start rolling in.

I think what people are overlooking is that this device has stereo cameras by default. That means every snap likely carries reasonable-quality depth data. At their scale of users, they will likely have the largest consumer depth-capture platform in the market. That's actually a big deal for building the infrastructure needed for the AR ecosystem.

For those wondering wtf these are, I don't like the styling, why do these exist... etc.: well, I don't think the target market for these is Hacker News readers. I will say that they do look awesome. Way easier to use these than a GoPro or holding a camera/phone. Hopefully it's not just locked down to Snapchat.

This article mentions Snapchat's hundreds of employees and multiple offices. This is one of the most obvious examples of the "what are they all doing?" question for me. I know it must take quite a few people to run operations at that scale, and of course they have an advertising business too, which likely explains the need for multiple offices. But it seems like Snapchat is still an extremely minimal app with only a couple of extra features being added over the years. Instagram had only 13 employees when it was acquired, so what role are most of these people in?

I like it. Seriously, "creepy" is just a word that means "I can't accept that reality doesn't work the way I'd like it to".

That said, I worry about implementation. My guess is that it's going to be directly and permanently tied to Snapchat itself. Which significantly reduces the potential usefulness of this product - not everything you record is something you only want to have sent directly to Snapchat. Personally, I want files. Plain, old files. Is that so hard to understand for all those cloud-first companies?

I don't understand why all these software companies are in a rush to make hardware. With the lone exception of Apple, hardware always seems to devolve into race-to-the-bottom commoditization, resulting in paper-thin margins.

There's been an empty store on exchange place in NYC financial district (near Tiffany's) that for a couple weeks has had a huge Snapchat logo taking up the entire window. I wonder if they're also gonna explore retail along with hardware.

Maybe Snapchat will sell some of their users' videos to porn companies (for VR porn)... There are two cameras - Obviously for VR; and given Snapchat's history as a sexting app, I think it's clear where things are heading here.

I can't be the only one who thinks this is going to eat GoPro's lunch, am I? Sure the initial version may not be as high quality as a GoPro and the time limit isn't as good but those are easy things to fix and they have a monstrous social network (something GoPro is sorta trying to break into).

Where are those 10 second videos stored? At Snapchat, on the phone, into the glasses? That changes dramatically the privacy implications of both the glasses and Snapchat. Remember what he said: he watched videos from one year ago. Snapchat has been all about deleting everything now.

I think Apple needs to acquihire Snapchat and promote Evan as the new Apple CEO. I have zero hate for Cook and think he is a great CEO. But Evan is shaping up to have some of the best modern product prowess out there. I don't know if these Spectacles will be a hit, but I think his choices are in the right direction.

Are they able to darken the lens glass to hide the camera a bit? Maybe they could match the black of the camera sensor to the black of the glass a little more. Otherwise it looks a lot like two cameras on your face.

This is a ridiculous product... reminds me of the classic upper management/CEO "ideas". You know the kind: obsolete, neglects societal concerns (security???), nobody around to tell them it's a bad idea.

> (Why make this product, with its attendant risks, and why now? Because it's fun, he says with another laugh.)

Sometimes you can look at something and just KNOW that there is not a chance that pile of junk is gonna gain traction.

This is exciting for the wearable headset market. If even a fraction of Snapchat's users get this, it will normalize the space much more than Google Glass was able to. This is especially true considering the young demographic Snapchat caters to, which I assume is more open to new technologies.

Here's the process that works (with some persistence) even if you don't have a revolutionary product:

1. Go to Google, toggle to news and enter the name of similar startups

2. Go through each recent article and add the journalist to a spreadsheet

3. Go on Email Hunter or Email Format to find how the publication formats its email addresses, and guess the journalist's from that (a small sketch of this guessing step follows the example pitch below). Journalists also tend to use firstnamelastname@gmail.com for their personal emails.

4. Email your pitch in 3-5 sentences max. Don't just describe what your startup does; use an interesting angle or story to show its impact.

Instead of: "We do delivery logistic optimization."

Story: "Why the heck is your technician always 5hrs late? Cause it took forever to fix the issues of the guy before you.

We're helping our customers like Comcast and Oracle smart schedule all their appointments based on data like a) how long issue x typically takes to fix and b) real-time traffic conditions.

In high school, I worked as a taxi dispatcher, seeing firsthand the inefficiencies in coordinating drivers."

Journalists don't want to advertise your startup for free, they care about writing a story that entertains and educates their readers. Feed one to them.
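As promised in step 3, here's a tiny helper that generates candidate addresses from the common corporate formats (the pattern list is illustrative, not exhaustive; verify the guesses with a tool like Email Hunter):

```python
# Generate plausible addresses for a journalist from common email formats.
def candidate_emails(first, last, domain):
    f, l = first.lower(), last.lower()
    patterns = ["{f}.{l}", "{f}{l}", "{f}", "{fi}{l}", "{f}_{l}"]
    return [p.format(f=f, l=l, fi=f[0]) + "@" + domain for p in patterns]

# Hypothetical example: a writer named Jane Doe at examplepaper.com.
print(candidate_emails("Jane", "Doe", "examplepaper.com"))
# ['jane.doe@examplepaper.com', 'janedoe@examplepaper.com',
#  'jane@examplepaper.com', 'jdoe@examplepaper.com', 'jane_doe@examplepaper.com']
```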

Small tips I learned when doing interviews (as a journalist and as a tech founder):

- Ask the journalist what kind of story s/he wants to do and what role you play in this story.

It doesn't matter if it's about you or a general topic -- understand what kind of basic arc/message/POV s/he wants. In 99% of cases their job is not to be an "investigator" but to tell entertaining stories, and they are usually happy to share their idea of the article.

- Help them get to the content s/he needs. Share the right info, the right contacts, industry insights. Essentially, help as much as you can in creating the ideal article.

- Also make sure to create soundbites that work as quotes. Quotes tend to be highlighted in articles.

- Make sure to connect the journalist with more useful contacts and help them find new ideas for articles.

- Last but too often forgotten: have good press images ready of you and your product. Some that work in portrait, some that work in landscape. Some that show a business look, some that show a more personal look (depending on the story the journalist goes for).

There is a lot of great advice here. I started as Larry Ellison's handler almost 20 years ago, wrote a book on publicity (Barbara Corcoran endorsed) and teach an online PR class.

Media has changed a lot in the last 5 years. It is important to familiarize yourself with the outlet and how the journalist writes. E.g., is the outlet known for listicles ("7 Ways to Crush It on a Startup Budget")? Or do they prefer pitches based on their editorial calendar?

Identify your target customer and find out what they read, listen to and watch.

If you have a tech product, focus on where a more technical audience might be, like podcasts. You'd be surprised, but it's not always the most well-known outlet like Mashable that will drive sales or users. While that is great credibility and exciting, there are many opportunities out there.

RESOURCE: HARO (www.helpareporterout.com) is a good resource to sign up for -- free opportunities 3x a day that journalists are posting. Since you will have the "lead" already, keep your response short and to the point.

NEWSWORTHY: To make something newsworthy, look at what is trending in the media (e.g., Angelina Jolie and Brad Pitt getting divorced) and then think about all the relevant angles.

Angles could include being a divorce attorney and contacting your local news to talk about the issues each party faces, or, if you have created a divorce app that helps with custody sharing, discussing how that would work.

We are in an era of high content consumption online and outlets like Forbes, Entrepreneur and Huffington Post rely on contributors. Forbes, for example, turns out 300 articles a day ... a DAY! There are more opportunities for your company to be featured now than in the past so that is good news for you.

Try not to get discouraged. It can take time to figure out what works and come back to the journalist with different angles. I've found consistent follow up works.

I know that I was rambling a bit but hope that helps.

I have some free info on my site www.rachelaolsen.com if you're interested including audio interviews with a Forbes contributor and a writer for US Weekly, Men's Health and Rolling Stone.

Warm intros work better, but I'd also wager that good cold emails work almost as well. For example, if you can connect with the writer about one of their articles and transition into your startup, that's a good 25% chance of getting covered (assuming your story is good).

Don't just submit a "tip" or go to the contact page; actually find the person who has written similar stuff and find their email.

I just participated in a panel on this very topic. The takeaways were to know who you're pitching, build a relationship, and be honest and succinct. If you have a good product relevant to that publication's readers, a good news editor or writer will pick up on that.

I have often wondered if it would just better to get straight to the point and outright bribe journalists. Most are making a pittance and a few thousand dollars in cash handed under the table should make any startup story come out like it is the new Uber.

Flybrix is having a very successful launch today driven by a PR strategy that goes against Steps 2 and 3 of this advice. We hired a great PR firm to manage contacts and we got blanket coverage everywhere because of a press embargo.

I don't think I could have managed this on my own and on the timeline we did it.

It's common knowledge around here that PR is wasted effort for startups. So why do this?

For startups and for any company, brand makes things nebulously easier. Sales require one fewer call, hiring pipelines are slightly more full, fundraising intros are easier. The point about creating News is really at the core: if you can make News one of the consistent outputs of your company, and you can see the results of News on your actual work, then you should do it.

Like everything else at a startup, brand is one tool. Don't use that tool unless the founders are strong with it and there's a well-defined path between that brand and traction.

I wanted to introduce our startup SnapEDA to the HN community. We recently completed Y Combinator, and have been quiet about the platform while we've been improving it. With that said, we'd love to get feedback from the HN community!

Our goal is to build a canonical library for making circuit boards: one trusted, centralized place to get digital models. These digital models include PCB footprints, schematic symbols, and 3D models. The library exports to a growing set of popular EDA tools: EAGLE, Altium, KiCad, Cadence OrCad/Allegro (Beta), & Mentor PADS (Beta).

The library is free because we believe in making this data widely accessible to enable innovation. The purpose of this new feature, InstaPart, is to give designers an option to "skip the queue" and get a part quickly if it doesn't yet exist in the free library. Once that part is made, it is then made available for the entire community to download for free. Growing the library is a top area of focus, so we hope to eventually render the InstaPart feature obsolete and just have everything available natively. :-)

In terms of standards, all new libraries are being made to IPC, and we also source models by partnering with component manufacturers. To ensure quality, we have an automated verification checker on each part page that provides a pass/fail result on common manufacturing issues that we plan to expand with additional checks.
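To illustrate the flavor of such automated checks (this is a toy sketch, not SnapEDA's actual checker; the pad representation and threshold are made-up assumptions):

```python
# Toy footprint check: flag pad pairs whose edge-to-edge gap is too small.
# A pad is (x, y, w, h) in mm, axis-aligned; 0.15 mm is an example threshold.
def clearance_violations(pads, min_clearance=0.15):
    bad = []
    for i in range(len(pads)):
        for j in range(i + 1, len(pads)):
            (x1, y1, w1, h1), (x2, y2, w2, h2) = pads[i], pads[j]
            dx = max(abs(x1 - x2) - (w1 + w2) / 2, 0.0)  # horizontal gap
            dy = max(abs(y1 - y2) - (h1 + h2) / 2, 0.0)  # vertical gap
            if (dx ** 2 + dy ** 2) ** 0.5 < min_clearance:
                bad.append((i, j))  # overlapping or too close together
    return bad

# Two 0.5 mm-wide pads on a 0.55 mm pitch: only a 0.05 mm gap, so flagged.
print(clearance_violations([(0, 0, 0.5, 0.3), (0.55, 0, 0.5, 0.3)]))
```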

I'm a professional hardware engineer, and doing this kind of work is something I do frequently and begrudgingly. Having this kind of service available will be a huge help. Some thoughts:

- Who writes the style guide? How do you make aesthetic decisions?

- Will you support multiple symbol styles? Would it be possible to upload stylesheets or specially annotated schematics and then regenerate already-extant parts in that style?

- Is there any intention of making the file writers open-source? Altium, in particular, has a stupid and annoying file format and it would be a gift to the community to be able to write good PcbLib and SchLib files. It would make it easy for me to write a linting and style-casting tool.

- Is there any chance of bringing down the latency, possibly with the application of more money? My rule of thumb is that it's worth spending about a hundred dollars to save myself an hour. A typical smallish library part takes me about five or ten minutes if I have to do both schematic and footprint, so waiting a day isn't really attractive at any price. But if I could throw money at you to get a result turbospeed, that'd be worthwhile.

I have mixed feelings about the custom footprint service -- I've worked with ~900-pin BGA SoC-type parts, and paying $30 to do that would be a no brainer, but paying the same for a 8-pin LDO would be a tougher sell -- maybe scale prices with part complexity?

The tougher sell to me is trust / verification of the InstaPart models before the community can vote them up or down. For most teams that I've been on, the most time-consuming part isn't really the pinout generation, but rather the checking of large parts (often 2 engineers checking pin-by-pin to ensure that footprint matches data sheet). I'd be much more comfortable using it if you outlined how pinouts are verified before sending them out to the customers.

I hope this works out! Making footprints is a huge PITA, and I'd love to be rid of it.

As a designer, this looks like a great convenience, since making a symbol and footprint usually takes half an hour on average. The 3D feature is the most useful part, as a proper model takes much longer, but at $79 it could be expensive for people like me who design boards with many ICs and unique components. With that said, in a very time-constrained project with a lot of new components we have no symbols for, if I could select and buy everything I need in a packaged deal, that would be appealing.

This is actually quite useful. As an occasional user of a variety of layout packages, once in a while I run into that one rare part which isn't in a library, and end up wasting an hour finding the specs, measuring the part, and figuring out how the component editor works.

There's the whole section at the top about availability and average price, but no link to go buy it? In addition to the InstaPart revenue, are you also going to make money with affiliate sales to parts sites (a la Octopart)?

Why? Making schematic symbols and PCB footprints is not time consuming, and at least you'll have the proper paste mask. Altium has an IPC footprint generator wizard, and 3Dcontentcentral has lots of user contributed parts in STEP format.

Seems like an interesting service. I'm kind of confused by the site though.

On the landing page it says, "Get any schematic symbol and PCB footprint delivered in 24 hours. Just $29." Then I did a search for a part and clicked request and, after signing up for an account, it said that to get it in 24 hours I need to pay $79 and $29 was for 5 days service. I also somehow ended up on a page at one point that said that you could request any part for free. So which is it really?

I also found the social network aspect of the site off-putting, particularly since there was no mention of it. After I signed up for an account to give the request-a-part service a try, I saw my user name plastered on the site's front page in a feed of recently signed-up users, and deactivating my account doesn't remove that.

I think your footprints should be shown with dimensions, because before I drop hundreds or thousands of dollars on a run of PCBs, I'm going to have to check your footprints to make sure they are correct, and that could actually take me longer than drawing them from scratch.

I strongly advise everybody with one free day (and not much better to do) to implement a basic fully connected feedforward neural network (the classical stuff, basically) and try it against the MNIST handwritten digits database. It's a relatively simple project that teaches you the basics, and once you have them, the more complex stuff becomes approachable. To me this is the parallel of implementing a basic interpreter in order to understand how higher-level languages and compilers work. You don't normally need to write compilers, just as you don't need to write your own AI stack, but it's the only path to fully understanding the basics.

You'll see it learn to recognize the digits. You can print the digits that it misses, and you'll see they are actually sometimes hard even for humans; other times you'll see why it can't understand the digit while it's trivial for you (for instance, it's an 8 but the lower circle is very small).

Also, backpropagation is an algorithm that's simple to develop an intuition about. Even if you forget the details N years later, the idea is something you'll never forget.
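If you want a starting point for the exercise, here's a minimal sketch in NumPy of the forward and backward pass (MNIST loading is omitted; assume X is an (n, 784) float array in [0, 1] and y holds digit labels 0-9):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (784, 64)); b1 = np.zeros(64)   # 784 -> 64 hidden
W2 = rng.normal(0, 0.1, (64, 10));  b2 = np.zeros(10)   # 64 -> 10 digits

def forward(X):
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))            # sigmoid hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)          # softmax probabilities

def train_step(X, y, lr=0.5):
    global W1, b1, W2, b2
    n = len(X)
    h, p = forward(X)
    d_logits = (p - np.eye(10)[y]) / n                  # dL/dlogits, cross-entropy
    dW2 = h.T @ d_logits
    d_h = (d_logits @ W2.T) * h * (1 - h)               # chain rule through sigmoid
    W2 -= lr * dW2;         b2 -= lr * d_logits.sum(axis=0)
    W1 -= lr * (X.T @ d_h); b1 -= lr * d_h.sum(axis=0)
    return -np.log(p[np.arange(n), y] + 1e-12).mean()   # loss, to watch it drop
```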

This is well-written and I applaud any step toward demystifying the sometimes scary sounding concepts that drive much of the ML algorithms.

Knowing you can pretty quickly whip up a KNN or ANN in a few hundred lines of code or fewer is one of the more eye-opening parts of delving in. For the most part, supervised learning follows a pretty reliable path, and each algorithm obviously varies in approach, but I know I originally thought "deep learning? ugh, sounds abstract and complicated" before realizing it was all just a deep ANN.
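To put a number on "a few hundred lines or fewer": a complete (if naive) KNN classifier is about ten, sketched here with the usual Euclidean-distance and majority-vote choices:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)   # distance to every example
        nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest
        preds.append(np.bincount(nearest).argmax())   # majority vote (int labels)
    return np.array(preds)
```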

Long story short: dig in. It's unlikely to be as complex as you think. And if you've ever had an algorithms class (or worked as a professional software dev) none of it should be too daunting. Your only problem will be keeping up the charade if people around you think ML/AI is some sort of magic.

This is actually part 3 in a series. For developers who are still getting oriented around machine learning, you might enjoy the first two articles, too. Part 1 shows how the machine learning process is fundamentally the same as the scientific thinking process. Part 2 explains why MNIST is a good benchmark task. Future parts will show how to extend the simple model into the more sophisticated stuff we see in research papers.

We intend to continue as long as there are useful things to show & tell. If there are particular topics you'd like to see sooner rather than later, please leave a note!

I took Andrew Ng's ML class on Coursera. It was certainly interesting to see how ML works, but I'm not sure what to do with it. In particular, I'm still unsure how to tell beforehand if a problem is too complex to be considered, how much data it'll require, and what computing power is needed.

Are there a lot of problems that fall between the very hard and the very easy ones? and for which enough data can be found?

So this may be as good a place as any -- I've got a decent math background, and am teaching myself ML while waiting for work to come in.

I'm working on understanding CNNs, and I can't seem to find the answer (read: don't know what terms to look for) that explains how you train the convolutional weights.

For instance, a blur might be

[[ 0 0.125 0 ] , [ 0.125 0.5 0.125 ] , [0 0.125 0]]

But in practice, I assume you would want to have these actual weights themselves trained, no?

But, in CNNs, the same convolutional step is executed on the entire input to the convolutional step, you just move around where you take your "inputs".

How do you do the training, then? Do you just do backprop on each variable of the convolution from its output, with a really small learning rate, then repeat after shifting over to the next output?

Sorry if this seems like a poorly thought out question, I'm definitely not phrasing this perfectly.
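(For what it's worth -- this is not from the thread itself -- the standard answer is: yes, the kernel weights are trained like any others, and because the same kernel is reused at every position, its gradient is the sum of the per-position gradients, computed in one backward pass; no per-position passes or tiny learning rates needed. A minimal 1-D sketch, reusing the blur kernel above:)

```python
import numpy as np

def conv1d(x, k):
    # "Valid" 1-D convolution: slide the shared kernel across the input.
    n = len(x) - len(k) + 1
    return np.array([x[i:i + len(k)] @ k for i in range(n)])

def kernel_grad(x, k, d_out):
    # d_out is dL/d(output). Every output position contributes a gradient
    # to the SAME kernel, so we simply accumulate the contributions.
    dk = np.zeros_like(k)
    for i, g in enumerate(d_out):
        dk += g * x[i:i + len(k)]
    return dk

x = np.array([1., 2., 3., 4., 5.])
k = np.array([0.125, 0.5, 0.125])          # 1-D analogue of the blur above
y = conv1d(x, k)
print(kernel_grad(x, k, np.ones_like(y)))  # pretend dL/dy = 1 everywhere
```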

Wow, finally! I have been using ST3 for several years (wow, years) and always wondered what was keeping the developer from labeling that version as stable. Of all the issues reported here [1], I have never encountered one while using the editor for pretty much all my work. Those $70 are definitely worth every penny. Sometimes I cringe at videos featuring ST with an unregistered license; this week it happened with a course from Google engineers via Udacity. Google engineers!!! As if they don't have a miserable $70 to buy a license. I assume they were in a rush and didn't have time to set the license, which I hope they bought.

I wish the Sublime Text people open sourced their code. I'd buy it from them in that event, and I'd finally have a text editor to recommend. Atom, VS Code, and everything else are completely blown out of the water by ST. There's a reason it's still around: ST is the only editor that can even think of doing what Sublime Text does.

Good work to the people behind it; it's an amazing feat, no doubt. Just please consider making it free software for those of us who care about that just a bit. Amazing work nonetheless.

I really, really wish it was open source. I understand why it isn't, but with its main competitors being Atom and VSCode, it's hard to warrant using a closed source text editor even if it's so much faster and I'm used to it.

When I first started using Sublime, I disliked the occasional popups, and thought I'd just keep using it without paying $70 for a text editor?!?!

But I HAD to buy the thing! Not because I wanted to avoid the annoying popup, but because of everything we know about Sublime today; performance, simplicity and intuitiveness of the UI, packaging system, etc.

The article mentions that they're coming out of beta in the near future! Nice! And I just noticed they're already mentioning Sublime Text version 4 (on the sales FAQ page).

I'm surprised so many people here are using Sublime to edit >100 MB files. Yes, it handles them (as long as the lines aren't too long), but it always has to load the entire file before displaying the first line. Aren't there some editors that don't have to do that?

On a related note, large files are often binary. I appreciate that Sublime can display binary files but it's pretty bare bones, and there's no editing support. I'd love to see what Sublime HQ could do if they worked on binary editing support for a couple of milestones. For example, the ability to locate and edit strings in binary files would be cool, as would a basic hex editor.

> Also new in 3124 is Show Definition, which will show where a symbol is defined when hovering over it with the mouse. This makes use of the new on_hover API, and can be controlled via the show_definitions setting:

Is this just an API hook which a plugin can add a definition resolver to, or does this automatically find definitions for all builtin languages? If the latter, this is super cool!
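As far as I can tell, it's both: the built-in Show Definition uses Sublime's existing symbol index automatically, and on_hover is a general hook any plugin can use. A minimal sketch of the hook (the popup content here is invented; the event and constants are from the ST3 plugin API):

```python
import sublime
import sublime_plugin

class HoverExample(sublime_plugin.EventListener):
    def on_hover(self, view, point, hover_zone):
        if hover_zone != sublime.HOVER_TEXT:   # ignore gutter/margin hovers
            return
        word = view.substr(view.word(point))
        view.show_popup("You hovered over: <b>%s</b>" % word,
                        location=point,
                        flags=sublime.HIDE_ON_MOUSE_MOVE_AWAY)
```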

A number of people expressed the need to edit large files. For the development of my own editor[0] I would be interested to know what kind of usage patterns most often occur. What are the most important operations? Do you search for some (regex) pattern? Do you go to some specific line n? Do you copy/paste large portions of the file around? Do you often edit binary files? If so what types and what kind of changes do you perform?

Sublime is by far my favorite editor: fast, and lots of plugins. Especially if you work with big files. I sometimes need to work with files larger than 150MB, and it takes a few seconds to open them. Atom crashes and can't even open the files.

Here comes a piece of history. I replied this to the Sublime Text 2 purchase confirmation email I got from Jon Skinner on 2011/08/30:

> hi jon,
>
> my salary was reduced by 30% just yesterday, but when i woke up today,
> the 1st thing i did was purchasing sublime. it's that fucking awesome!
> i wish it would be open source, so people could learn from it...
> but, hey, i doubt many open source developers could contribute quality
> code to it.. :)
>
> if u could implement the elastic tab stop feature (which has some reference
> implementation on the nickgravgaard.com/elastictabstops/ site), then i
> would be happy to pay another 60bucks for it.
> actually, u could sell separate license for the version which has this
> feature...
> i know it would be quite elitist, but it worked well with the black macbooks
> back then...

On Mac I use TextWrangler for quick editing and VS Code as the IDE. I never need to open super large files so after reading this discussion I tried to open a 177MB text file in TextWrangler and it opened quickly and was editable. Searching within the file was also super fast.

The addition of Phantoms [1] is the killer feature in this release for me. This will allow embedding custom HTML [2] inline in the editor, which is something I've been dreaming of - the power of Atom's nice plugin UIs with no compromise in speed!
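For anyone curious what that looks like from the plugin side, a minimal sketch (the HTML and key name are made up; add_phantom and the layout constant are the documented ST3 API):

```python
import sublime
import sublime_plugin

class InlineResultCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        # Attach a block of HTML below the line containing the first cursor.
        region = self.view.line(self.view.sel()[0])
        html = '<body><span style="color: #8f8">result: 42</span></body>'
        self.view.add_phantom("inline_result", region, html,
                              sublime.LAYOUT_BLOCK)
```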

Been using Sublime Text 3 for years (and I do have a license), and been trying out Atom/VSCode lately. Atom can get real slow, but I feel like the extensions for Atom are of higher quality (linters, TypeScript integration). I think it might have something to do with HTML/CSS/JS vs Python for plugin development.

One minor feature request: can you please simplify setting the font size for the tree browser and for the menu entries? (I know the tree browser font size can be set by the theme, but it is a bit non-trivial using PackageResourceViewer to patch theme files to do this.) I still haven't found a way to change the font size of the menu entries.

I suspect ST3 still being in beta is a service to ST2 users, who paid for ST2 a relatively short time before the first ST3 betas, and whose license keys are valid for the ST3 betas but not for ST3 once it's out. Not that I am in this situation at all.

- no engagement with the developers. For $70 I expect to be able to file bug reports and maybe some feature requests. Without being banned.

- multi file search is ridiculously poor. I can't save search patterns, the long text box with all the file patterns is hard to navigate (on OS X if you put the cursor at the end it starts scrolling), but most of all the result pane doesn't stick as it used to. I have to search again every time I click on a file from the results then close it.

- copy and paste is STILL buggy on OS X. Sometimes you paste a string and it puts it in the line above the one where you have your cursor.

- package control is not included. It's just common sense

- the scrollbars are invisible on OS X. I don't want a minimap; it uses too much space and adds too much noise

- I use BracketHighlighter. Every time I want to customise the highlight colour it's a royal pain in the neck because of ST3's crazy architecture

People always talk about how SO is so terrible, and yet it's still the best resource by a mile. People have just forgotten that the lack of that carefully curated environment is what created the complete mess that was the forums we used to have before.

Sure, it's not ideal - SO could do better in terms of helping people understand the site's goals and enforcing the rules in a less hostile way, but they are working on that (a lot of the more hostile rule enforcement tropes are banned and filtered against), and it's a hard problem.

SO isn't dying any time soon, and the content is still good. If you want to kill it, please go ahead and solve the issue of explaining to users how to contribute quality content and getting them to take that in instantly.

I used to contribute a lot, which tapered off as I had other things filling my time. Towards the end though, the help vampires were getting to me, and I understand why people are harsh towards new users in some situations, it's an easy trap to fall into. Trying to fix that problem by stopping the curation of the content is insane, however. That's a fast route to going from some people being turned away to having no decent content.

What people don't understand is that SO serves literally all developers. No matter what choices we make there will be upset people. Do we reduce moderation? Help vampires take over. Do we keep moderation as is? People complain we are too strict (or not enough). Do we allow only English? We are elitist. Do we allow SO in other languages? We are fragmenting the community.

I've found that if you go to Stack Overflow for a fast-moving topic like JavaScript, the first answer isn't correct in modern terms. The next set of answers, without check marks, is likely more correct.

So you have to read the entire page to get a glimpse of the truth.

This can be bad if you're just looking for an immediate solution; for learning, however, it's fantastic, because it shows you a recorded history of how things evolved over time in a framework.

While Stack Overflow is still an outstanding resource, they never really solved the "why bother?" problem for the really difficult subjects.

Your choices are: answer a trivial JavaScript question and get a million points, or spend an hour painstakingly explaining a hard problem for maybe one point (where, half the time, the original submitter doesn't even bother to accept your answer, so you lose even that small point boost).

There needs to be some scaling factor. For example: if the number of questions anywhere on a certain topic is quite small then popular answers in that area could receive more credit; or, perhaps individual parts of a post could be voted on to increase value (say, you see a really detailed example so you tip that answer for taking the time to write it all out).

> However, a 2013 study has found that 77% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.

Is this actually a problem?

I've always viewed Stack Overflow as a "write once, read many" community. For top questions, it looks kind of like Quora.

Checking my own stats: I've been a member for 7 years, asked 4 questions, and given 45 answers while reaching ~86k people [1]. My karma is not super high at 936. However, I've gleaned so much value from the SO community (and saved so much time), that these numbers only begin to scratch the surface.

Using these numbers as metrics doesn't reflect the value of SO's knowledge base. I don't think Stack Overflow is on the decline at all.

Stack Overflow is a community-driven website for a society that is used to following rules and being disciplined.

Nowadays, the popularity of the website puts it in a bad position: handling millions of impolite pseudo-developers who've heard that it can help them with their problems.

In other words the community and popularity changed, not StackOverflow. And no it's not dying, it's just working! Sorry for some of us that remember the times where questions were mostly high-quality, but I don't think there is a way to prevent collateral damage in this case.

I prefer to read an "opinionated question" instead of 10 paragraphs about a problem that, in the end, is unsolvable by logical decision.

On the opposite end of the "new user" perspective that is trying to ask a good question, as someone who is seeking to sometimes answer questions, it is pretty hard to actually find a good question. The vast majority of the questions I run into _are_, in fact, duplicates, poorly worded/incomprehensible, far too broad, etc. (one example that has stuck with me is "how do I install HTML/JavaScript on my computer?").

Though perhaps there are good, legitimate questions out there, it is also conceivable that some frustrated new users think that their question is appropriate for SO when it is not. This most often happens, from what I've seen, in the "far too broad" category. Just looking at SO right now, for example, I saw a question that was asking how to pass data on an iOS app from one place to another. In that user's mind, he/she has probably been trying to figure out the basics of making an iOS app, and this seems a legitimate question. However, this is an incredibly broad architecture/design question. SO isn't a resource to hand-hold you when you're learning something new. It's a resource for asking specific questions when you can't find the solution anywhere else (and you've actually tried).

If your question isn't clear, it's not possible to get good answers. If your question is based on a fundamental misunderstanding of the technology you're using, the best possible answer is "You're fundamentally misunderstanding the technology you're using."

Stack Overflow encourages users to edit other people's questions for clarity and formatting, which I think is helpful for a lot of new users that don't know how the site works yet. And my experience has been that when there isn't enough information in a question, you tend to get comments asking for more, which is productive.

I do agree that Stack Overflow can be a bit daunting for new users, especially because you're not allowed to comment right off the bat. I believe the threshold is 50 reputation, which can be hard to get early on, because questions get answered so quickly on Stack Overflow that it's hard to find questions that still need an answer.

Consider this regarding the "I keep finding interesting closed questions on SO" complaint -- the "what is the best linter for PHP" or "What is your favorite cartoon" or "where is the best place to meet female programmers (for romance)" (ref http://i.stack.imgur.com/x9ik2.jpg ) -- why not ask those questions _here_?

If the answer to that is "because that isn't a good place to ask those questions" or "because HN isn't set up for answering those types of questions" then consider the possible response of "maybe SO isn't set up for answering those types of questions well either?"

When there is more noise than signal in a question and answer page, it is useless. Go dig around https://community.oracle.com/community/java and consider why you don't put site:community.oracle.com in your search (or for that matter, see how well one can find the answer to an error message in /r/javahelp). When there are dozens of answers that consist of "try libXyz" it isn't useful - you're going to have to dig through each of those to see if it works or not for you... and you might as well have done a google search instead.

When the questions are "how do you make a triangle with '*'" a dozen times over in September, those questions need to be closed so they don't waste the time of people who are trying to find good questions to answer.

I'm a bit divided about Stack Overflow. On the one hand, it's simply one of, if not the, best resource for programmers. On the other hand, it's become somewhat toxic and counterproductive. The better you become as a programmer, the less value you derive from it. The true niche experts (i.e., product/project owners and Microsoft/Google/Apple employees) are less and less to be found, and the other replies will often be an exasperating mixture of answers people have googled and complaints about some meta-aspect of the question.

Any community that reaches a certain size will face unique problems and I think Stack Overflow has some of the same problem as reddit does: you have to be very careful on how you give power to users. Power corrupts and becomes a goal/game in itself. Karma/power is a great incentive in the beginning of a community, but can become destructive in the long run. That some programmers have a certain type of personality is probably not helping either.

There seem to be many people on Stack Overflow who love to wield the small amount of power they've accrued without actually contributing that much. On the other hand, you have to enforce rules and curation to keep quality up.

It's a very fine balance and hard to get right. It's mostly about human psychology and incentives. I think there's some tweaks they could do to improve things but I also understand that from their perspective why change something that works?

The danger is that the true experts stop helping/answering questions on Stack Overflow because they find the community too toxic. It might turn into a downward spiral until there are mostly trolls and newbies left.

I found myself a little incensed at this article, rather than by it. Yup, there are limits on new users' privileges. Yes, there are users that play The Reputation Game. Yes, there are many questions that don't get answered to one's satisfaction. Yes, there are trolls. Yes, there are disillusioned users. Yes, yes, yes.

Even so, I also find all of this to be Perfect-As-Enemy-of-Good whinging. Instead it could've been a plea full of suggestions. A Call To Action!

So let's do that now, though I may come off as a prick here because I kinda think all of this whining is the real problem of the internet.

> The privilege limits

IMHO they're rational and reasonable, but your specific use-case was about new folks not being able to leave comments. So how could we solve this? Perhaps new users ought to be able to, but only the author would see them; when either the Q or A author upvoted such a comment, it'd become visible to the world.

> Troll responses to your bad/incorrect/misleading answer

That sounded like a bummer. You know you can flag these comments already so... punish these bad actors in the provided way and move on.

> Respond to comment that says my question is a duplicate (it's not, which I clarify to avoid it being closed as a duplicate)
> [...]
> Another issue with this is that duplicates show up despite the crotchety moderators complaining about it.

You can't have your cake and eat it too. It's casual, usually-helpful internet folk doing their (usually) altruistic best to help. Most of the time, this works great. Blog posts like this point out the exception, and are valuable, and they're very enticing clickbait. But when they offer only boo-hoo's and no ah-hah's, move on.

What most people fail to understand (but was explained very well by Joel in some talks) is that SO is primarily optimized for Google & read-only users just looking for the answer to a common issue. By that metric they are extremely successful.

Many things people complain about are deliberate design choices that actually made SO popular in the first place.

There are obvious exceptions; however, I have come to understand the greatness that is SO. All that is needed is a well-thought-out question, with a little bit of work shown on the side.

I have a theory about why many complain about SO (please don't comment about this line, there are obviously exceptions):

There has been a ridiculous sense of entitlement with the growth and recent appeal of tech jobs in the past 5 years.

All this crap about "trolling" getting out of hand, not enough diversity (THE FIELD WAS PRIMARILY FILLED WITH NERDS OFTEN LACKING ANY SOCIAL SKILLS, no one else wanted to look or hang out with "that guy who is good with computers") etc.

It's a field that was mainly driven by the desire and enjoyment of messing around with computers. Therefore most of the good ones (among the diluted masses of "experts" nowadays) spent a great deal of time on these things. I'm not surprised that somebody would get pissed off when another person comes around and starts asking for answers without showing any real effort or drive.

SO will forever be a poor resource for the huge incoming population of coders.

Frankly I'm tired of reading these pieces. For one thing, they always focus on being mean to new users, which in my opinion isn't the problem. My biggest complaint about Stack Overflow is that you get the same amount of points for answering an easier question or a hard one, so complex questions languish while "please write a regex for me" questions get five answers in half an hour.

One person has a few bad experiences and says it's in decline. I'll counter his bad experiences with my positive experiences of both asking and answering questions, and of finding more answers there than anywhere else.

The article is probably right about some of the factors that explain the "77% of users only ask one question". But IMHO it misses an important one: most users find the answer to their question just by searching.

In fact, once you get rejected (by a troll or not) for asking a question you could have found on SO, you become more careful before posting.

Also don't forget that a huge majority of net users are ghosts / read-only :)

Point-based moderation generates toxicity; Hacker News gets toxic too. Stack Overflow went all-in on moderation before they understood the social consequences of it. There's an opportunity here for a social network with a moderation system that cares about how it makes people feel and how feelings impact user-generated content.

Stack Overflow is useful, but a lot of that usefulness comes from its complete dominance of search results. By keyword stuffing in the sidebar (and last I checked the nofollow links) they maintain a strong search presence for virtually every topic.

So, this position as the top tech knowledge hub is sort of artificially propped up at this point. You type in your query, arrive at a page with something vaguely like the question you asked, are faced with pedantic flags about how the question you came to have answered is somehow unfit, and maybe some useful info. Also maybe some outdated info with real, unaccepted answers below it.

I'm not aware of another site that so heavily depends on content it itself seems to consider unworthy, siloed into so many vaguely overlapping sub-sites.

To me this reads as if the author is not asking questions in a way consistent with Stack Overflow's guidelines [1]. They make these guidelines really explicit and clear in the FAQ.

> How do I ask a good question?

> Search, and research

> Write a title that summarizes the specific problem

> Introduce the problem before you post any code

> Help others reproduce the problem

> Include all relevant tags

> Proof-read before posting!

> Post the question and respond to feedback

> Look for help asking for help

However, writing questions in this way where you provide a MCVE, explain everything you've tried, and relate it to existing questions to help reviewers is time consuming. It shifts part of the time burden of a good question onto the asker vs reviewers or early answerers.

As someone who's done many reviews on Stack Overflow, I think following these guidelines is the best way to not get downvoted or flagged.

> However, a 2013 study has found that 77% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions. With this article, I'm exploring the likely reasons for that very, very low percentage.

"77% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions."

Old SO user, started during the beta (user:2092), never asked a question. I don't bother with SO because a) sub-standard login (OAuth); b) the time it takes to answer a question correctly, with sufficient detail, eats into time I could spend doing other things; c) SO mods/responses are mostly arseholes.

I also hate how original, authoritative documentation is drowned out on Google by crappy code examples or empty questions that are locked, down-voted or ignored.

None of the Stack Overflow moderation irritations matter to me in the end.

All that really matters to me is that SO has employed a reputation system that provides a strong quality signal for answers to questions that I have. This allows me to quickly assess the quality of any given answer based upon the reputation of the answerer and the upvotes that the answer has received.

All other SO problems, including any aggravations encountered while trying to give back to that community, are relatively insignificant.

A bizarre problem I have had on SO is that, after years of gaining reputation into the 5 digits, if I now ask a question on a topic that is new to me, I often get the response that I should know better given my high reputation. It's weird. If I have my colleague ask the same question, a guy with much less reputation, the answers are helpful. It's as if the quality of the question is judged by how much the asker must already know, which is ridiculous if you explore a lot of different technologies.

Then there are genuine trolls who, despite massive reputation, seem to have the sole goal of proving everyone wrong. It's also weird.

There is a problem with giving moderation powers to entrenched individuals who are not experts.

I've seen this on the English language stack exchange, the Japanese language stack exchange, the Physics stack exchange, and on stack overflow. The people who are there on the site gathering points and up and down voting aren't necessarily experts, and in many cases they're amateurs or people who know something about one thing, yet have moderation powers over things they really don't know about.

The same applies to the dustier corners of Wikipedia, where entrenched non-experts often reduce articles to the level of their own ignorance. Since nobody is getting paid for their participation in these sites, it's hardly surprising that these people end up predominating.

Sure, Stack Overflow kind of sucks sometimes and people are sometimes really up their own asses about things being exactly right and having a question worded explicitly, but it is by far the best resource for programmers to find help from other programmers due to its ubiquitousness in the programming community. I haven't found any other alternatives better; Quora isn't useful for programming questions (a general Q&A/opinion style question is better suited for Quora, but not "What's wrong with my code?"), and I wouldn't use ExpertsExchange. Are there any other notable places to ask these kinds of questions?

So it sucks, but there's no resource to replace it. You might not like it, and because it's not something you have to help with, you're free to stop contributing (unlike at work, where you can't just say "I don't like this, I'm going to stop working on it"). But why not help everyone and contribute, either to Stack Overflow or by making a new, better resource, rather than being grumpy and only making your pessimistic self feel better?

An aside: the header image lags considerably when I scroll (on Safari 10)... Why isn't it just a static image on the page, with JS to detect when the aspect ratio changes at the page level rather than the image level?
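Roughly what I mean is something like the following; this is only a sketch, and the element id and image paths are hypothetical:

```typescript
// Leave the header as a plain static <img> (no scroll handler at all), and
// only react when the page's aspect ratio actually flips between wide and
// tall, e.g. to swap in a different crop of the same image.
const header = document.querySelector<HTMLImageElement>("#header-img")!; // hypothetical id

let wasWide = window.innerWidth / window.innerHeight > 1;

window.addEventListener("resize", () => {
  const isWide = window.innerWidth / window.innerHeight > 1;
  if (isWide === wasWide) return; // aspect class unchanged: do nothing
  wasWide = isWide;
  header.src = isWide ? "/img/header-wide.jpg" : "/img/header-tall.jpg"; // hypothetical paths
});
```

Since nothing runs on scroll, the browser can composite the page normally and the jank goes away.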

I've answered 21 questions, got upvotes on 15 answers, had 3 answers accepted by owners, and asked 2 questions myself, both of which were upvoted. I have no idea what these people in the article are talking about. SO is super friendly and super helpful.

I worship SO like the all-knowing deity it is, but I don't love it. It's a cruel god, like that of the Old Testament. I've never contributed to the site, though I've used it for years, and I never will -- simply because it's too arcane and silly and I don't wish to play the reputation game just to leave a simple comment or whatever. When a new god appears, I'll surrender completely.

I've never been a user of SO; I don't find it useful except for really obscure stuff. I always reach for the docs first, and those usually have my answer, and typically a better one. There are many occasions when co-workers come and say "LOOK, HERE'S THE SO ANSWER, THE GOLDEN KEY" and that answer was often just wrong, or half the time didn't apply at all to what was happening.

I recently replied to an SO question with "A new feature has been added that supports your use case: link", which genuinely solved the problem.

The reply got removed by a moderator saying that I need to describe the solution and can't just provide the link. I didn't bother. Currently the only answer suggests sub-optimal solutions, and I am not replying to SO questions again.

It felt like a land grab - it wouldn't help the questioner, but it would help SO, by bringing more data to their platform.

Every user that has high karma/reputation/whatever on a site - including this one - should be forced to experience it from a noob's perspective once in a while. It can be a real eye-opener, as the experience for the in-group and the out-group can be almost totally different. How many times have even regular users of a site - not just noobs - pointed out a problem only to be downvoted, harassed, or even banned by the "senior users" who don't see the problem precisely because they are so senior? They're like the "senior architects" on a software project, who no longer contribute actual code (or novel opinions on this side of the analogy) but always sit in judgment of others'. I guess it's a universal human tendency, but the point is that karma systems should be designed to attenuate it instead of reinforcing it.

A large portion of my rep comes from questions I wrote that were (considerably later) marked "closed", for various reasons. All of them continue to accumulate thousands of views and the occasional point or two... that's never made much sense. Kind of a statement on how nonsensical StackOverflow's become.

It feels almost like the variables in their little machine are out of tune. If they scaled back the free privileges and hired some trained moderators, they could probably clean things up a bunch.

Glad someone brought it up. I had guessed it was only the case with me. It's still a great resource for searching answers, but posting your own would only lead to humiliation and down-votes.

My guess, and the reason I stopped posting questions, is that they only want to answer things that are general, not specific, at least at this point. I still use SO a lot, but only to search for possible answers; I have never dared to post again after being trolled and downvoted.

The one thing missing from most sites is meta-moderation of the mods, the way Slashdot had it. That was one of the more innovative things about Slashdot back in the day.

It would randomly ask users how accurate a particular moderation was. I'm supposing that if enough people voted against a moderation, it would be reversed and the moderator's privileges removed.

Places like Reddit and SO need this in order to control the mods themselves.
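In pseudocode it's something like the sketch below; the thresholds and names are guesses for illustration, not Slashdot's actual values:

```typescript
// Slashdot-style meta-moderation as described above: randomly sampled users
// rate whether a moderation was fair, and enough "unfair" votes reverses the
// moderation and strips the moderator's privileges.
interface ModAction {
  moderatorId: number;
  fairVotes: number;
  unfairVotes: number;
  reversed: boolean;
}

const MIN_VOTES = 10;     // don't judge on a tiny sample (assumed value)
const UNFAIR_RATIO = 0.7; // 70% "unfair" triggers reversal (assumed value)

function metaModerate(action: ModAction, revokePrivileges: (modId: number) => void): void {
  const total = action.fairVotes + action.unfairVotes;
  if (total < MIN_VOTES) return;
  if (action.unfairVotes / total >= UNFAIR_RATIO) {
    action.reversed = true;               // undo the moderation itself
    revokePrivileges(action.moderatorId); // and take away the mod's powers
  }
}
```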

SO is an invaluable resource, but I don't ask questions because of the hostility there; plus, a lot of the time, in thinking about how to phrase a question properly, I end up answering it myself.

But one of the major problems I see there is the people who comment solely for the sake of commenting; it seems like they must get paid by the word, they comment so much, and it's never helpful, always snarky and irrelevant.

I'm a top 4% user for the year apparently, I've asked 40 questions & posted 318 answers. SO is a minefield.

It's an excellent resource and I've found so much value in it, but the moderation is hyper-aggressive at times, and often duplicate marks reflect old behaviour in old libraries. I think it needs a clean sweep at some stage, leaving the current content but dropping everyone back to 0.

I agree with a lot of this, despite the criticisms being seen as cliché.

I've found the reddit programming communities to be more helpful as a user, even though finding similar enough content is difficult. Further, I've noticed reddit and blogs edging in on SO's Google results. I wonder if this will be the trend.

SO has been great for so long because the content was trustworthy. The trolls are removing that edge.

It's natural that complex, evolving organisms accumulate more and more entropy, but their decline, going by what you HNers are saying here, is due to the unavoidable predominance of more and more web-idiots, mostly coming from 3rd- and 4th-world countries, which I nevertheless fully understand: they are looking for recognition as human beings in a global world, for guidance to make their talents work, and for opportunities in richer environments while starting from nothing. It's the same reason a lot of web-idiots, me among them, are writing here on HN from 1st- and 2nd-world countries. That said, noise is reduced by stricter filters, or by changing the definition of noise and your attitude towards it. Your choice.

It's funny, because many of the points made about SO in this article remind me of my experiences with HN. My first couple of comments were downvoted, as I did not yet know the HN etiquette and unspoken rules of discourse, and that can be pretty discouraging for a newcomer. I'm still pretty reticent to comment for fear of being downvoted.

There is also the familiar rush to be the first to post some news story, which is almost impossible.

All that said, I've had overall extremely positive experiences on both SO and HN. The breadth of collective expertise and depth of comments on both sites is really awe-inspiring. I view the strict rules of engagement as a feature, not a bug.

I've idly wondered how I could restrict queries to questions that have been closed as too <whatever> or not enough <whatever>, as those are often the most interesting and educational items, even if they don't answer my specific question.
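The Stack Exchange API does make this possible: its documented /search/advanced endpoint takes a `closed` filter. A quick sketch, where the query text itself is just an example:

```typescript
// Use the Stack Exchange API's /search/advanced endpoint, which accepts a
// `closed` parameter, to pull only closed questions matching a query.
async function closedQuestions(query: string): Promise<void> {
  const url = new URL("https://api.stackexchange.com/2.3/search/advanced");
  url.searchParams.set("site", "stackoverflow");
  url.searchParams.set("closed", "true"); // only questions closed as dupe/too broad/etc.
  url.searchParams.set("sort", "votes");
  url.searchParams.set("q", query);

  const res = await fetch(url);
  const data = await res.json();
  for (const q of data.items ?? []) {
    console.log(`${q.score}  ${q.title}  ${q.link}`);
  }
}

closedQuestions("undefined behavior signed overflow"); // illustrative query
```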

So far I still haven't participated in SO very much. For the most part it isn't even worth asking a question. (It's a good thing and a bad thing.) Also, I have noted that they try to prevent you from deleting: after you click that submit button, it's their property, apparently.

I think a part of SO's problem is its size. There are many other smaller communities in the Stack Exchange network that seem to be more friendly. The size of SO prevents it from becoming a community as such, which manifests in the lack of shared norms and ethics which can then be meaningfully enacted in practice.

I've had some very bad experiences with Stack Overflow. They are very hostile towards new users, and the 50-point system that doesn't allow commenting is absurd. If I ask a question, I should be able to comment for clarification, right?

I wonder if they ever A/B test their point system. I only use stack overflow when Google points me to it.

I think SO could do a much better job of getting users to actually ask/answer questions rather than just use it as a read-only site.

One interesting issue with answer ordering is that if you later notice a typo in your answer and edit it, you get pushed to the bottom of the answer stack, despite having posted the first answer. This seems like an issue SO could solve to help contributors; I doubt many abuse the system by completely rewriting an answer to stay on top.
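The fix implied here seems simple enough. A sketch of the ordering I'd expect, with illustrative field names (this is the commenter's suggested behaviour, not SO's actual sort):

```typescript
// Order answers by score, and tie-break equal scores on the original
// creation time rather than last-activity time, so that a typo edit
// never demotes an early answer.
interface Answer {
  score: number;
  createdAt: number;    // epoch ms of the original post
  lastEditedAt: number; // epoch ms of the latest edit (deliberately ignored)
}

function orderAnswers(answers: Answer[]): Answer[] {
  return [...answers].sort(
    (a, b) => b.score - a.score || a.createdAt - b.createdAt,
  );
}
```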

I am not sure why, but I am very lucky with Stack Overflow, almost always. It might be because I ask detailed questions that are 99% of the time valid, and because I use languages that have welcoming, nice communities and usually aren't extremely popular, so trolling is minimal.

The complaint is essentially describing how it's turned into Wikipedia, where the author is describing a space where "working the rules" is more important than trying to achieve the goal those rules are meant to enable.

There is an air of arrogance on SO that is unpleasant. Help is supposed to be about being open, friendly and relaxed, not arcane rules, criteria and deciding you know best.

If experts are uptight and arrogant the desire to learn quickly vanishes. Being friendly does not preclude being firm.

I have noticed many questions being closed in an arbitrary manner and, worse, in a mean-spirited way. The first may be OK, but the second is simply unacceptable. Who are all these people on power trips, and why does SO allow it?

Let's be honest, Stack Overflow and its network of sites have been in decline for years. Sadly it has turned into a vast and wide content farm and SEO ploy that is full of search spam and users who copy/paste material from legitimate sources. To top it off, Stack Overflow uses "nofollow" on outbound links to make sure the true source of material gets no credit in the eyes of Google, Bing, etc. Rinse and repeat for years, and spammers have taken over the asylum, with some trolls for good measure too, and that's unfortunately much of Stack Overflow today.

How did this happen?

In short, they appeared to receive preferential treatment from Google after complaining very publicly and loudly about not ranking at the top of search results. Ever since then they have widely dominated search results for any vaguely related technical query. Ironically, at that point in time their primary complaint was about other content farms scraping their content.

>> JS: All of these sites that go to Stack Overflow, scrape our content, and reprint it with garbage ads, Google Adsense-encrusted pages. They're basically producing worse versions of our pages and they use these slimy SEO techniques, so they actually rank higher than us.

>> For a long time, we were getting enormous complaints from our own users that they'd search on Google and they'd find Stack Overflow content that had been stripped from its useful form but SEO'd like crazy and encrusted in ads and thrown up willy-nilly. And these sites were getting a lot of traffic. So that was his complaint and of course he phrased it in this larger frame of "Is Google losing their edge, etc. etc.?"

>> BI: It got a lot of attention. Do you think Google's doing a good job of fixing this sort of problem?

>> JS: They fixed it. They called us up at the time and said, "Thank you for bringing that up. You have lit a fire under the team that is supposed to be working on that problem that has not been delivering."

Matt Cutts, then the head of Google Web Spam, posted to Hacker News about this to "fix" the problem of sites outranking Stack Overflow.

>> As many of you know, DaniWeb was hit by a Google algorithm update back in November 2012 and we lost about 50% of our search traffic. In investigating the issue, I discovered that DaniWeb, in addition to most other programming forums out there, all lost their google traffic to StackOverflow.

That, in turn, perpetuated the reposting/scraping activity and blatant spam posts to the StackOverflow network, since the site network ranks dominantly in every vaguely related query. Now years later, an even larger volume of material on Stack Overflow is not original content and doesn't even pretend to be. It's absolutely littered with copied/pasted content and blatantly spammy/promotional posts from around the web.

From the outside looking in, it appears that Stack Overflow has become exactly what they once actively complained about.

I'm for more hostility. It should be more difficult to ask questions; there are way too many people asking idiotic questions there. Allowing this makes the site worse for people who really have important questions.

I might get downvoted for this, but here's a story. We just finished picking a brand name, after 2-3 months of intensive work.

Being fond of .io's, I naively googled my <brandname>.io and found that park.io owns it; this happened last week. I immediately sent an email to inquire. We considered the price, and then when I came to buy it today, a week later, the price had tripled. This was a fixed-price domain, NOT an auction.

That's clever price manipulation: detect when someone wants something, let it sit, and when they're ready, triple the price. Maybe that's a hint as to how he made so much money? Anyway, we'll just do get<brandname>.io or something like that, as a compromise. Thanks for being a douche, park.io!

I don't understand some of the negative comments here. This guy built a million dollar business in a year providing a service that people want to pay for. He did it all on his own with no other co-founders or employees. I say "Congrats!"

NIC.IO now has backordering. For park.io to continue being successful at landing and selling premium domains, he must be appraising the value of each domain and his chance of selling it in one of his auctions, then weighing that against the NIC.IO backorder price of 60 EUR (67.35 USD) plus a 60 EUR registration fee, and finally backordering it himself far enough in advance that no one else beats him to it (because only one backorder can be placed on NIC.IO).
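Back-of-the-envelope, that weighing looks something like this; the two 60 EUR fees are the ones just mentioned, while the sale price and sale probability are made-up inputs, and I'm assuming the backorder succeeds so both fees end up paid:

```typescript
// Expected profit of backordering a domain to resell it at auction.
function expectedProfitEur(saleChance: number, expectedSalePriceEur: number): number {
  const backorderFeeEur = 60;    // NIC.IO backorder fee (per the comment)
  const registrationFeeEur = 60; // registration fee (per the comment)
  return saleChance * expectedSalePriceEur - (backorderFeeEur + registrationFeeEur);
}

// e.g. a domain appraised at 500 EUR with a 40% chance of selling at auction:
// 0.4 * 500 - 120 = 80 EUR expected profit, so worth backordering.
console.log(expectedProfitEur(0.4, 500));
```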

I find it shocking that you would post on HN: "Hey guys, I make $125k/mo making other people's lives harder".

Given the way your current infrastructure is configured (vulnerabilities and all)... somebody could probably cost you ~$30-70k/mo in AWS resource utilization at a cost of ~$600/mo. The moment you park on the domain of someone who shares your internet ethics, that will be an interesting day for you.

I don't understand how this can survive in the long run - what's stopping someone else from setting up an identical service at lower prices? There is literally no lock in because users can sign up for multiple services and potentially pay a lower price (depends on which one snags the domain).

Jealousy and cognitive dissonance. This guy is more successful than them, so clearly he's done something bad.

Welcome to HN: home of the insecure narcissists who like to argue over programming languages, humblebrag about their gifted childhoods, and prove that they're superior to anyone more successful than them.