I think a lot of people are looking at this as "maybe this article is offering advice." Instead, the way I'm looking at it, this is giving us a rough quantification of what we all know intuitively: that college is so expensive it's reaching a natural ceiling.

The way this manifests in reality isn't millions of people all doing a cost/benefit calculus like this and coming to the rational conclusion they can skip college. What happens is that slowly, the meme that "Jim went to college and he doesn't seem better off" seeps into the collective consciousness. More and more people start running into this evidence, and reconsider mortgaging the house (figuratively) to send their kids to college, and the upward pressure on college tuition starts to lessen.

After a while this meme that college is a tradeoff becomes well established, and it becomes common knowledge that you think hard before you send your kid to college. The underlying reason is something like "you can in principle make a better return investing in the S&P," but the way it becomes a force in the real world is through a collective Bayesian reasoning process we all engage in as a society.

As a culture, we really need to stop telling 17 year olds not to worry about money, to just go to college and figure something out. There is always someone ready with a romantic appeal to a classical education, and I find it so frustrating.

Wasting four years is a huge cost. Years, decades of debt is a huge cost. Going to college with no plan about money? The costs are assured.

Plus, the degrees people are actually getting aren't necessarily worth all that much to the educational romantics. Business administration is what it is.

* "Transaction Processing: Concepts and Techniques" by Gray and Reuter.

* "Fundamentals of Wireless Communication" by Tse and Viswanath.

* "Genetic Programming: An Introduction" by Banzhaf et al.

* "Applied Cryptography" by Schneier.

EDIT: A few additional comments:

(1) Although these books are problem-domain specific, some of them had benefits outside of their problem domains:

* The Dataflow book has some great coverage of fixpoint algorithms. It's really helpful to recognize when some problems are best solved by fixpoint analysis.

* The "dragon book" takes a lot of the mystery out of compilers. That's somewhat helpful when writing code that needs to be fast. It's super helpful if you want to work with compiler-related technologies such as LLVM.
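The fixpoint idea mentioned above is easy to sketch. Here's a minimal toy illustration in Python (my own example, not from the book): iterate a monotone step function until the output stops changing, the way a dataflow analysis computes reachability.

```python
def fixpoint(f, x):
    """Apply f repeatedly until the value stops changing."""
    while (nxt := f(x)) != x:
        x = nxt
    return x

# Toy graph; reachability from "a" is the least fixpoint of one-step expansion.
edges = {"a": {"b"}, "b": {"c"}, "c": set(), "d": {"a"}}

def step(reached):
    # Grow the set by everything reachable in one more hop.
    return reached | {m for n in reached for m in edges[n]}

reachable = fixpoint(step, frozenset({"a"}))
print(sorted(reachable))  # ['a', 'b', 'c']
```

Many problems (transitive closure, constant propagation, type inference) have exactly this shape once you spot it.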

Working Effectively with Legacy Code by Michael Feathers. It's a bit hard to wrap your brain around the Java and C++ examples unless you have experience with them, but the techniques are timeless. You may need to practice them extensively before you understand how important they are, though. In a recent book club we did at work, a common complaint was, "This just looks like common sense". Indeed it does... though the common sense is uncommonly hard to find when you are staring at the actual situations this book helps you with.

Programming Pearls by Jon Bentley. And its followup. It's old but it does get you thinking about things.

I'd also recommend The Linux Programming Interface by Michael Kerrisk, as it teaches so much about what makes modern Unix what it is, but it's arguably quite oriented around C by necessity. It's not a "C book" by any means, though.

"The Design of Design: Essays from a Computer Scientist" by Frederick P. Brooks [1] is language-agnostic and worth reading.

It's about software engineering but also about hardware and some different kinds of design outside of IT.

From an interview about the book [2]:

> Eoin: Your new book does talk about software design in places, but its really about design generally, and the case studies span buildings, organizations, hardware and software. Who is the book aimed at? Are you still writing primarily for people who design software or are you writing for a broader audience?

> Fred: Definitely for a broader audience. I have been surprised that The Mythical Man-Month, aimed at software engineers, seems to have resonated with a broader audience. Even doctors and lawyers find it speaks to some of their team problems. So I aimed this one more broadly.

Brooks is also the author of The Mythical Man-Month which is often mentioned on HN.

'Implementation Patterns', Kent Beck. A semi-language-agnostic extension of his 'Smalltalk Best Practice Patterns' on how to clearly and consistently express what you're saying when you code.

'Facts and Fallacies of Software Engineering', Robert Glass. Glass presents a list of things everybody knows, or ought to know, and gives both academic and opinionated support and/or critique for why they are and aren't so.

Unless you have some understanding of your system's architecture, how it's run in production, why a production environment is Really Different and a Big Freaking Deal, and what operations is supposed to look like, you'll never be an effective programmer, no matter whether you run your own operations in a small start-up or work for a large enterprise with dedicated operations teams.

- The Elements of Computing Systems: Build the virtual hardware and software from scratch. The software includes writing a compiler in a language of your choice, so it's agnostic in that sense.

- The Art of the Metaobject Protocol: Extremely insightful treatment of OOP! Alan Kay called it the 'best book in ten years' at OOPSLA 97.

Not a book per se, but "Out of the Tar Pit" by Moseley and Marks is definitely a must-read.

Abstract:

```Complexity is the single major difficulty in the successful development of large-scale software systems. Following Brooks we distinguish accidental from essential difficulty, but disagree with his premise that most complexity remaining in contemporary systems is essential. We identify common causes of complexity and discuss general approaches which can be taken to eliminate them where they are accidental in nature. To make things more concrete we then give an outline for a potential complexity-minimizing approach based on functional programming and Codd's relational model of data.```

I've been thoroughly enjoying "Designing Data-Intensive Applications" by Martin Kleppmann. It primarily deals with the current state of storing data (databases, etc) starting with storing data on one machine and expanding to distributed architectures...but most importantly it goes over the trade-offs between the various approaches. It is at a high level because of the amount of ground it covers, but it contains a ton of references to dig in deeper if you want to know more about a specific topic.

Definitely, definitely, Kernighan and Plauger's 1976 book _Software Tools_. The code is in RATFOR (a structured dialect of FORTRAN) but all the ideas are language-independent. It remains, four decades on, the best book I have ever read on how to solve the real problems of real program development. Very practical, and covers a vast amount of ground. (As it happens, I am re-reading it right now.)

One of the most influential programming books I've ever read. The code is in Java, but it's easy to follow even for a non-Java developer, and the truths are universal. Learn the most fundamental design and encapsulation patterns. Uncle Bob Martin is a legend. This book has probably made me tens of thousands of dollars.

Joy of Clojure & SICP. To a lesser extent, Learn You a Haskell. 7 Languages in 7 Weeks is an excellent baby-step book if these are too daunting. 7in7 was my first intro to many new ideas.

Any language worth learning has this property of influencing the way you think forever. TDD, Code Complete & co. are all very integrated into mainstream industry and are no longer novel. If you find yourself needing to recommend that your colleagues read Code Complete, you might consider working on the skills to get a better job.

It's a collection of programming exercises I used when I taught introduction to programming. They start out incredibly trivial ("prompt for a name, print 'hello [name]' back to the screen"). But the trivial part is, in my opinion, the fun part when you work with a new language.

That program is a two line program in Ruby. But it might be much more complicated if you implemented that as your first GUI app in Swift for iOS.
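For flavor, the whole exercise fits in a couple of lines of Python as well (wrapped in a function here so the greeting is easy to check; the interactive version is just `input` plus `print`):

```python
def greet(name):
    # The entire exercise: take a name, produce the greeting.
    return f"hello {name}"

# Interactive version, two lines:
#   name = input("What is your name? ")
#   print(greet(name))
print(greet("Ada"))  # hello Ada
```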

I wrote the book to teach beginners, but I and others use those exercises to learn new languages. The book has no answers, just the problem statements.

An Introduction to General Systems Thinking. Gerald Weinberg. This book, now over 40 years old, addresses the 'core within the core' of the reality of systems. Unbelievably good, with a very light-hearted tone.

If you consider C to be language-agnostic, here are some gems. These are personal favorites as much for their excellent writing as for their content.

The Unix Programming Environment was published in 1984. I read it over 20 years later and was astonished at how well it had aged. For a technical book from the 80's, it is amazingly lucid and well-written. It pre-dates modern unix, so things have changed but much that goes unstated in newer books (for brevity) is explicit in UPE. (Plus, the history itself is illuminating.) It gave me a much deeper understanding of how programs actually run over computer hardware. Examples in C are old-school and take a bit of close reading but oh so rewarding. https://www.amazon.com/Unix-Programming-Environment-Prentice...

Programming in the 1990s by Edward Cohen. A rather practical introduction to the calculation of programs from their specifications. Plenty of introductions to computer programming involve guessing your program into existence. This is one of those rare books that give a solid, pragmatic approach (with examples) of developing software from solid, mathematically sound specifications and avoiding errors by design.

Even if you don't adopt formal methods in your day-to-day work (often we're not building sky-scrapers) it's a useful book to give you insight into the kinds of questions one should be asking and thinking about when designing software systems.

Let Over Lambda. Not entirely agnostic, but delves into Forth, Smalltalk, C, Scheme, and Perl while overall being about Lisp. Fascinating book; really a look at metaprogramming (macros) and closures (that's what "let over lambda" is).

I see you read Kent Beck's TDD book. A good follow-up might be Roy Osherove's "The Art of Unit Testing." I found it to have a lot of pragmatic, practical advice. It's not the final word, but it is a good next step after Kent Beck's book. It has some C#-specific material, but that stuff is interesting to read about even if you're working in other languages.

I have been reading Game Programming Patterns lately. It explains the design patterns with examples from games, and it is really well written by an engineer at Google (Bob Nystrom): http://gameprogrammingpatterns.com/

After I complete this book, I think I'll read his other book: Crafting Interpreters. This one teaches about implementing a programming language from scratch, once in Java and a second time in C.

Practical Object-oriented Design in Ruby is a great read with a lot of advice on approaching design problems, approaching refactoring and thinking about how to model. It's in Ruby but I feel a lot of its advice is general.

Robert C. Martin introduces the disciplines, techniques, tools, and practices of true software craftsmanship. This book is packed with practical advice about everything from estimating and coding to refactoring and testing. It covers much more than technique: it is about attitude.

"Practices of an Agile Developer: Working in the Real World" - this book was like the Bible for me when I started my career in IT 10 years ago. I re-read it multiple times and I still stick to the practices described in this book. They are language agnostic, they are pretty clear and easy to follow and they can really improve your skills.

Sipser's Theory of Computation. It covers automata and languages, computability, and complexity - and is brilliantly written, the proof style in particular: clear 'proof idea's followed by the details that can be easily skipped if you're not interested, or it is clear from the 'idea'.

It's not language-agnostic, but it's still a great book: The D Programming Language. The reason I recommend it is because Alexandrescu is a great writer who knows a lot about programming languages and the kinds of tradeoffs that a low-level, practical, and safe programming language like D must do.

Even if you never intend to program in D, I encourage you to read this book to get a different view on metaprogramming, memory safety, and concurrency.

I've really enjoyed "Dependency Injection in .NET"- despite the name, the book itself is really 95% about Dependency Injection and relatively language agnostic. It exhibits a bottom-up approach to using inversion of control in a way that makes sense and is scalable.

I really liked "Building Microservices" by Sam Newman. It's a good review on current software architecture and software development process in addition to going over microservices. Honestly microservices are a topic in the book but it could just be called "Software Architecture in 2016".

Learning more and more about imperative programming, OOP, design patterns, etc is good, but branching out into declarative programming and the functional and logic paradigms will stretch your mind for the better.

The great thing, I think, about The Reasoned Schemer is that it tackles a complex topic with almost no prose. The whole book is basically one code example after another, in a Q/A style. "What does this do?" <allow you to think about it> "Here is what it does, and here's why." Rinse and repeat. I think more technical books should try this.

Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce. The examples are, IIRC, in Java, but the ideas about TDD are applicable to any OO language. It'll make you think more about how you write testable code and the tests themselves.

If you are writing code, you are doing yourself (and anyone using your code) a disservice if you do not read something on secure coding. There are not a ton of language-agnostic resources, but you may want to start with "Software Security: Building Security In".

I would then look for language specific options as well, because programming for security can vary a lot amongst languages. Writing securely for native applications running on a system is much different than writing secure web apps.

I like the 4 simple rules; I think he originally got these from Kent Beck. It is easier to keep this system in your mind when it is just a few basic principles.

Working Effectively with Legacy Code by Michael Feathers

As others also mentioned this. I think this is becoming more important as people transition to new jobs where they have to take on existing software. Having a process to deal with code that lacks documentation and tests is really important.

I loved Cracking the Coding Interview. Whether or not you want to practice for an interview, this book gives you a friendly refresher on data structures and algorithms, time complexity, and related topics (and you can improve your problem-solving skills too). For me at least, it rekindled a great interest in these topics and made me love my job even more.

Not exactly language-agnostic, nor about programming per se, nor a book, but [Google Style Guides](https://github.com/google/styleguide) offer a lot of specific, opinionated, practical advice that you can apply immediately. It's like an MLA manual for programmers.

Video compression is not understood well enough throughout the whole stack yet.

I recently got a 1080p projector for home use, so now movies / TV series in my home are viewed on a 100" screen. Content is mostly from Netflix and Amazon Prime Video.

Netflix does a really good job with encoding. I cannot say the same for Amazon Prime Video; even with their exclusive (in UK) offerings, like American Gods or Mr Robot, the quality of the encode is quite poor when viewed on a big screen. Banding, shimmering blocky artifacts on subtle gradients, insufficient bit budget for dimly lit scenes - once you become aware of the issues, it becomes really distracting.

OTOH a really big screen is a fantastic ad for high quality high bitrate content. Anything less than 2GB/hour is noticeably poor.

Remember when we did this ugly interlacing thing, so that we could get a higher (50/60fps) framerate?

When did we decide that 24/25/30fps was good enough? Now we have a Blu-Ray standard that cannot handle greater than 30fps, and media corporations that are unwilling to release content via any other medium.

Put that together with ever-increasing resolutions, and the amount of pixels something moves across from one frame to the next becomes greater, and video looks more and more choppy.
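To put rough numbers on that (my own illustrative figures, not from the thread): the per-frame jump of an object panning across the screen scales with resolution and inversely with frame rate.

```python
def pixels_per_frame(width_px, pan_seconds, fps):
    # An object crossing the full screen width in pan_seconds
    # moves this many pixels between consecutive frames.
    return width_px / (pan_seconds * fps)

# A 2-second full-width pan:
print(pixels_per_frame(1920, 2, 24))  # 40.0 px/frame at 1080p, 24 fps
print(pixels_per_frame(3840, 2, 24))  # 80.0 px/frame at 4K, 24 fps
print(pixels_per_frame(3840, 2, 60))  # 32.0 px/frame at 4K, 60 fps
```

So doubling resolution at the same frame rate doubles the jump, while going from 24 to 60 fps brings it back down below the 1080p figure.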

Frankly, this is a much bigger problem than NTSC ever was. Even with content (The Hobbit, Billy Lynn's Halftime Walk) being created at higher framerates, users have no way to get the content outside of a specialized theater, because the Blu-Ray standard cannot handle it and because people seem to honestly believe that higher framerates look bad.

I suppose we can only hope that creators take better advantage of digital mediums that do not have such moronic, and frankly harmful, arbitrary limitations.

It's interesting to note that the architecture of the first ISO codec, MPEG-1, is almost identical to the one we have today, H.265. That codec was standardised in the late 90s, so this design has carried through for about 20 years. Most of the changes relate to the targeted parameters, such as frame size, frame rate and bitrate. Only the last step, 264 --> 265, seems to have added new features.

The first example interlacing image is wrong: it shows a running dog with a simulated division into scan lines, but does not take into account the timing difference, which was one of the major sources of deinterlacing artifacts. Alternating fields are 1/60 second apart in time.

My uncle died suddenly this year. He was unbelievably caring - and not just to family - but to everyone he ever met. His funeral was jam packed with everyone from homeless people to executives of multi-billion dollar companies.

I always thought that his ability to always have you, and whatever you had last talked about with him, on his mind at any moment was some kind of supernatural gift. I was surprised to find out at his funeral that he actually kept an excel spreadsheet of everyone he met and what they needed and were going through. He reviewed this constantly.

It didn't lessen his genuine love for everyone, just let him be a little more super human.

Founder here. Here to answer any questions you might have. The site is not perfect: it's not mobile optimized, there are probably bugs, there are a gazillion features missing, no APIs. But it's a labour of love, open-source, and I hope it will help people other than me. I want to grow this product but I need to know what you need, people. Edit: sorry for the bugs I see popping up here and there on my server. Didn't expect this many users and this much traffic.

IMHO: You want to pivot this product, now, to compete with ourfamilywizard.com. OFW is a great concept, but the site runs slow, its search and reporting are erratic and basic, and the UI can be difficult. It is, however, the only game in town for managing divorced families, and it's about $200 per year. You could offer:

- Timestamped and hashed communications and records.

- Lower price point than OFW.

- More intuitive reporting.

This will NOT have widespread appeal under its current use, and will be tough to make money from.

Quick note to the founders: You need to add more genders. A lot of early adopters in the Bay Area have friends who are non-binary or are non-binary themselves. I don't want to be forced to mis-gender people who don't identify as male or female.

Something self hosted, but with integrations with gmail would be very useful for me (and something I would donate too). Push/pull from google calendar, push/pull emails from a contact, grouping people, sending unique emails to each member of a group. That kind of thing.

Call me cynical, but I find the term CRM pretty wrong. Yes, I understand what it does, people have been keeping track of this stuff (if I read the intro page correctly) for ages, and still I find this borderline creepy and overengineered.

Disclaimer: I am using e.g. Facebook's "On this day" feature to reminisce about old stuff with friends, I also keep birthdays in a calendar. Maybe just the professional spin puts me off :)

This looks superb, thank you for building a great product with a non-hostile privacy policy.

Before trusting my data/time with something, I generally like to understand the motivations of the creators.

Do you intend to run this as a profit making business, or just as an open source project? Do you have thoughts/opinions on monetization? How do you intend to continue to develop it in the medium/long term?

An app might be asking a lot, but a responsive design to allow mobile usage would be a big help. Or at least something that could hook into IFTTT or something so I can easily add info on the go via email, twitter, or something.

For example, let's say I'm at a party and meet some people I want to log. Mobile input is really key there because I may have been drinking and won't remember that info in the next 15 minutes.

I would like a minimal CRM for super-connectors. People like investors, promoters, etc. who traffic in relationships, and develop social capital from making introductions.

When trying to make an introduction, it's exceedingly hard to search your extensive network under certain criteria. Particularly if you want "fuzzy" matching, i.e. not just restricted to a specific geography or tag (e.g. "entrepreneur"), but looking for nearby geographies and tags.
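As a rough sketch of what "fuzzy" tag matching could look like, here's a toy using Python's standard difflib (the contacts, tags, and cutoff are all hypothetical, just to illustrate the idea):

```python
import difflib

# Hypothetical contact list with free-form tags.
contacts = [
    {"name": "Ana", "city": "Oakland", "tags": ["entrepreneur", "hardware"]},
    {"name": "Bo", "city": "San Francisco", "tags": ["investor"]},
    {"name": "Cy", "city": "Boston", "tags": ["enterpreneur"]},  # typo kept on purpose
]

def matches(query, candidates, cutoff=0.8):
    # True if any candidate string is "close enough" to the query.
    return bool(difflib.get_close_matches(query, candidates, cutoff=cutoff))

def search_by_tag(tag):
    return [c["name"] for c in contacts if matches(tag, c["tags"])]

print(search_by_tag("entrepreneur"))  # ['Ana', 'Cy'] - the misspelled tag still matches
```

A real tool would want the same looseness for geography (nearby cities, not just exact strings), which is where exact-match search in most contact apps falls down.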

[edit: Facebook has recently disabled Graph search features and it is exceedingly difficult even to figure out which of your friends is in a particular city.]

I like this idea. :) We could even collaborate if interested. For the very same purposes, plus to organize everything else I want to, I wrote and use OneModel (AGPL), creating inside it a calendar with ticklers, lists of gift ideas or other ideas for activities, etc., all sorted by when I want to see them or how I can most easily find them in a hierarchy. I have created a sort of structure for things I might want to remember about each person (journal of past interactions, contact info, etc.) that I also use for my dealings with some businesses, so I can revisit who said what when, if needed. And it can auto-provide that structure for future persons or organizations I add to my contact list, but only when wanted. Same with anything else I want to track.

And it creates a sort of personal journal for me as a side-effect, by exporting everything created (or archived) for date ranges, so my odd random notes fit in also. It lets you optionally mark things as public or private, export things as .txt outlines or an .html mini-website, and (hopefully) soon exchange info with others if desired. Self- or my-hosted.

Unfortunately, OM also lacks a nice video and installation is still manual (some postgres config instructions then "java -jar...") until interest warrants a real installer. I use it for everything (no mobile support yet) and it is extremely efficient for a touch typist, and easy to learn as everything is on the screen in menus generated context-sensitively on the fly.

I get it - it's tempting to solve the Harvard Business School issue by sending the MBA students to prison before they wreak havoc on the economy. But I'm not sure their dads would pay the exorbitant tuition.

Also, there is the whole question of whether it would be fair to the other prisoners. Pretty soon the prison economy would be infested with cigarette derivatives and yard swaps.

This is a cheap, sensational headline and an article that lacks depth and analysis.

At the end of 2006, there were ~160,000 people in prison "institutions" in the state of California[1]. The design capacity of those institutions was ~79,000 people[1], so occupancy was ~204% of design capacity.

At the end of 2016, there were ~114k people in prison institutions[2], which was ~134% of the design capacity.

Obviously, if prisons are vastly overcrowded, and over time the number of people in prison is reduced substantially, the per-prisoner cost will sharply increase. There are no financial savings on infrastructure because institutional capacity is still vastly exceeded, and the savings in other areas will not be proportional to the overall drop in prison population because the people released early tend to be less expensive to imprison, as they tend to be incarcerated for less serious crimes.
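The arithmetic behind those percentages is straightforward to check (the 2016 design capacity isn't stated in the comment, so it's implied here from the other two figures):

```python
def occupancy_pct(population, design_capacity):
    # Occupancy as a percentage of design capacity.
    return 100 * population / design_capacity

def implied_capacity(population, pct):
    # Back out the design capacity from a population and occupancy percentage.
    return population / (pct / 100)

print(round(occupancy_pct(160_000, 79_000)))  # ~203, i.e. the ~204% cited for 2006
print(round(implied_capacity(114_000, 134)))  # ~85,075-bed implied design capacity for 2016
```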

Whatever one's political allegiance, the fact that the prison system in the state of California has been running at a minimum over 130% of design capacity for the last decade is a tremendously serious issue, and it feels trivialising to make a nonsensical comparison to the cost of university tuition, and to present the fact that per-prisoner costs have risen while prisoner numbers have fallen as anything less than blindingly obvious.

A discussion of the prison crisis in California seems completely worthy of Hacker News, but it shouldn't be based on an article like this.

I think the Prison Industry should be judged by their recidivism rate. The Prison Guards union in Cali is very, very strong (they were behind the '3 strikes' law, ensuring a lifetime of "clients"). Their pay and benefits should be tied to the recidivism rate.

Prisons should not be training grounds for future criminals, but they are today.

Also: prisons should be shuffled periodically, mixing up the population. That'll prevent the formation of criminal gangs inside. Outside, they'll be living in a diverse, mixed environment anyways; might as well get them started on that inside.

Think about that for a second: $75K/yr for abhorrent, yet improving, conditions. You'll come to the correct conclusion that contractors are absolutely fleecing not only the taxpayers who fund the prisons, but the prisoners themselves.

Half a lifetime ago, I had to spend a weekend in jail while visiting a friend in California (accused of theft by a drunk lady who couldn't find her credit cards and fingered me instead of realizing that she may have left them at the bar; the best part was when I had to fly back out for a court date, they told me they were dropping the charge for an obvious lack of evidence. This decision was made on the actual court date, so I got to waste even more money on airfare and travel). I was surprised at the entrepreneurial zeal of those who have no issue profiting from misery and suffering. In the LA area at least, many former/older celebrities are investors or owners of prison supply companies. Most notable was Bob Barker's company, which sold travel-sized generic toothpaste for $7. In this case, the price is wrong, Bob.

The fingerprinting machine was the size of an ultra-deluxe 70's Xerox machine, regularly needed service, and looked like it had a sticker price around 5 figures (a feature that is just an add-on to $500 phones). I think that 10x-20x inflation is pretty consistent across the board in the American penal system. The collect calling system is also beyond ridiculous, given the near-zero cost of landline telecommunications and that most cell phones can't receive collect calls. The food you're eating is the absolute worst (in terms of taste, of course) nutritionally. Nearly everything is processed, and is done so in the cheapest way possible. When I say as cheap as possible, I mean that the $.49 Nissin Ramen is an actual delicacy (no exaggeration: some of the inmates would pool their resources together and "cook" the ramen in a giant plastic bag with hot water that surely must be leaching PCBs and/or phthalates from the container). After a few months of that diet, even the most physically fit people developed a weird type of gut and loss of musculature. I didn't eat anything while there, but I observed that the only nutritional guideline that could possibly be met was that of 2K+ calories/day. I know it's not Club Med, but that type of diet is a blocker for any type of rehabilitation. It was depressing to look at and had the effect of making one more docile and depressed.

So while California may spend $75K per prisoner, the value they spend is probably closer to $7K. It's kind of brilliant in a sadistic way, as if the prisons, their programs, food, and environment were designed to maximize recidivism.

Now that I think about it, I wouldn't be surprised if some elements were designed in this way.

Even as a 20+ year network engineer, I don't think I've run across an article about networking that balances depth and breadth so well. All of the information presented is high-level enough to retain (at least as a big picture), but detailed enough to avoid hand-wavy 'magic networks' descriptions.

Bravo - well done.

Edit - Also worth adding that this article is a rarity in that the details are actually accurate! Even things I read in networking books and trades often have egregious errors - usually due to the breadth of the topic matter.

As a front-end web developer with no formal computer science background or traditional programming experience, I find these kinds of articles extremely valuable. I like to understand as much as possible, at least conceptually, about what happens throughout the stack, even if I don't touch it. Does anyone have any links to anything similar? Perhaps for the Linux kernel or other lower-level systems, but with a top-down overview like this? Especially anything that would build on this article. Effects and unexpected phenomena that manifest in networks like this would also be interesting.

> In reality, our 5-volt CMOS system will consider anything above 1.67 volts to be a 1, and anything below 1.67 to be 0.

Worth noting that the region from 1.67 V to 3.33 V is undefined and systems in practice will not behave nicely for signals in this range. A CMOS logic 1 needs to be above 2/3 Vdd to be reliably recognized.
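That three-region behavior is easy to model directly. A small Python sketch using the common rule of thumb VIL = Vdd/3 and VIH = 2*Vdd/3 (exact thresholds vary by logic family and datasheet):

```python
def cmos_level(v, vdd=5.0):
    """Classify an input voltage for a CMOS gate: 0, 1, or undefined."""
    if v < vdd / 3:        # below VIL: reliably a logic 0
        return 0
    if v > 2 * vdd / 3:    # above VIH: reliably a logic 1
        return 1
    return None            # forbidden zone: behavior is unspecified

print(cmos_level(1.0), cmos_level(2.5), cmos_level(4.0))  # 0 None 1
```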

It's good, but... I wish it were more critical? Excluding the gross control plane (as it wasn't the focus), there's some awkward overlap between IP and Ethernet (link aspects).

My guess, which I'd love to see explicitly confirmed, is that it goes back to the internet as the internetwork lingua franca between existing networks, an idea predating the more technically motivated concept of layered protocols providing compounding abstractions.

I don't want to sound like I'm nitpicking an otherwise great piece, but without criticism the history seems inevitable. Alternatives and hypotheticals are good to keep the design space from atrophying in the face of collective amnesia.

I assume it's not mentioned to keep the article brief, but most devices these days support MTU sizes greater than 1500 bytes. Jumbo Frames[1] allow for ethernet packets of up to 9216 bytes.

Since they have to be fragmented back down to 1500 for devices that don't support them, however, it's typically only used in closed internal networks, like a SAN. People typically see about a 5% to 10% bump in performance.
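The raw header-overhead part of that bump is easy to estimate. A sketch with assumed overheads (40 bytes of IPv4+TCP headers, 18 bytes of Ethernet framing; real numbers vary with TCP options, VLAN tags, preamble accounting, and so on):

```python
def efficiency(mtu, ip_tcp_headers=40, eth_overhead=18):
    # Fraction of each on-the-wire frame that is actual TCP payload.
    payload = mtu - ip_tcp_headers
    return payload / (mtu + eth_overhead)

print(round(efficiency(1500), 3))  # ~0.962 with a standard 1500-byte MTU
print(round(efficiency(9000), 3))  # ~0.994 with a 9000-byte jumbo frame
```

Header savings alone are a few percent; the rest of the observed gain typically comes from fewer per-packet interrupts and protocol-stack traversals.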

This is great. Along similar lines, "Foundations of Python Network Programming" by Brandon Rhodes (and originally John Goerzen) is fantastic (and not just for Python programmers, as the Python API is a pretty transparent wrapper over the POSIX APIs).

I love this article because of the depth and detail which can be expected of his work, but also because you get all the way to the last sentence before he reveals the question which inspired him to do the deep dive.

I hate to be that guy, but I don't think this link was meant for the general public. Gary Bernhardt, the author of this piece, posted this link to his Twitter followers about 2 weeks ago to receive feedback. Remove the hash at the end of the URL, '/97d3ba4c24d21147', and you'll see you'll be redirected to purchase a subscription to Gary's screencasts and articles.

So if you are enjoying this article consider purchasing a subscription and supporting more work like this.

Sounds like this is more in line with what they did with Apple Pay vs. traditional credit cards, i.e. they give you randomized IDs each time so the other party can't track you from transaction to transaction. Ads can still appear, but they won't know who you are, so it's a direct shot at Google and others looking to give people "targeted" ads based on user behavior. I agree it's an issue that needs to be addressed. Just because I searched for X two days ago doesn't mean I want to see ads for X for the next two months.

This is great, but unfortunately, until Apple ups its browser security game, Safari is a non-starter. On macOS, switching from any other browser to Chrome is in the top 3 things you can do to materially improve your security in ways that actually matter in the real world.

Looks like this will stop (after 24 hours) some companies from doing an initial redirection to set cookies for tracking purposes... Example:

1. Search Google for hockey sticks

2. Click on search result hockeystick.com

3. hockeystick.com issues a 302 to adcompany.com which then issues a 302 back to hockeystick.com

Why the 302? Because in Safari, you can only access cookies in a third-party context if the domain has been seen in a first-party context. Setting a cookie on adcompany.com in a first-party context gives you the ability to read that cookie in a third-party context, which can then be used for tracking.
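Schematically, the bounce reduces to a routing function like this (my own sketch; the domain names are the hypothetical ones from the example above, not real services):

```python
def bounce_response(request_host, request_path):
    """Sketch of the first-party cookie bounce described above.

    hockeystick.com 302s to adcompany.com, which sets its cookie in a
    first-party context and immediately 302s back.
    """
    if request_host == "hockeystick.com" and request_path == "/product":
        return (302, {"Location": "https://adcompany.com/bounce?back=https://hockeystick.com/product"})
    if request_host == "adcompany.com" and request_path.startswith("/bounce"):
        back = request_path.split("back=", 1)[1]
        return (302, {
            "Set-Cookie": "track_id=abc123; Max-Age=31536000",  # now a 1st-party cookie
            "Location": back,
        })
    return (200, {})

status, headers = bounce_response("adcompany.com", "/bounce?back=https://hockeystick.com/product")
print(status, headers["Location"])  # 302 https://hockeystick.com/product
```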

They're just being a little sophisticated in how they block third-party cookies. This will hardly stop other tracking scripts, tracking images, widely-used fingerprinting techniques, and related JS calls. So nothing remotely close to even Brave, let alone Tor or the Epic Privacy Browser.

The cynic in me sees this as cutting off Google, and then tracking within the browser so they become the source of cross-internet tracking. I'd be on the lookout for any new 'personalization' feature that comes into the browser. E.g. WWDC 2018: 'Today we're happy to announce Siri integration with Safari! She will provide personalized recommendations and results by applying machine learning to your documents and data!'

As in 192.168.0.2o7.net. Remember, "SWF" stands for Small Web File. Yes, they actually tried to get users to swallow this when Shockwave Flash started to be used in devious ways, such as to track users.

Omniture's business is third-party tracking cookies, similar to Google Analytics or KISSmetrics. Not sure, and don't care, whether Flash is used so much anymore. If you're too young to remember, search and ye shall find information about "permanent" Flash cookies that could not be removed.

Apple is not saying "We will not engage with companies selling third party tracking cookie services." Clearly they are not opposed to third party tracking cookies in principle.

Instead they are announcing some change to their browser. Wow, exciting. It is not clear what exactly this announcement accomplishes for users. Probably nothing. If you are trying to avoid ads and tracking, popular browsers (without extensions, etc.) are not your friends.

It says a lot about the state of the web that both Apple and Google are looking at publishers and saying "Look, if you won't fix your websites, we'll fix them for you" (Google in the form of AMP on mobile devices). However, as one of those who subscribes to the opinion that AMP breaks the web, I greatly prefer Apple's approach.

It makes me wonder how many publishers at national newspapers and magazines are even aware of what's going on.

Sorry, but if a junior dev can blow away your prod database by running a script on his _local_ dev environment while following your documentation, you have no one to blame but yourself. Why is your prod database even reachable from his local env? What does the rest of your security look like? Swiss cheese I bet.

The CTO further demonstrates his ineptitude by firing the junior dev. Apparently he never heard the famous IBM story, and will surely live to repeat his mistakes:

After an employee made a mistake that cost the company $10 million, he walked into the office of Tom Watson, the C.E.O., expecting to get fired. Fire you? Mr. Watson asked. I just spent $10 million educating you.

I was on a production DB once, and ran SHOW FULL PROCESSLIST, and saw "delete from events" had been running for 4 seconds. I killed the query, and set up that processlist command to run every 2 seconds. Sure enough, the delete kept reappearing shortly after I killed it. I wasn't on a laptop, but I knew the culprit was somewhere on my floor of the building, so I grabbed our HR woman who was walking by, told her to watch the query window, and showed her how to kill the process if she saw the delete. Then I ran out and searched office to office until I found the culprit:

Our CTO thought he was on his local dev box, and was frustrated that "something" was keeping him from clearing out his testing DB.

Did I get a medal for that? No. Nobody wanted to talk about it ever again.
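The watchdog in that story reduces to some simple pure logic (a sketch of my own: in MySQL you'd feed it rows from SHOW FULL PROCESSLIST and issue KILL for each id it returns; the pattern and threshold here are made up for illustration):

```python
def queries_to_kill(processlist, pattern="delete from events", max_seconds=2):
    """Given (id, seconds_running, query) rows, return the ids of
    matching queries that have been running too long."""
    return [
        pid
        for pid, secs, query in processlist
        if query and pattern in query.lower() and secs >= max_seconds
    ]

rows = [
    (11, 0, "SELECT 1"),
    (12, 4, "DELETE FROM events"),
    (13, 1, "DELETE FROM events"),  # not over the threshold yet
]
print(queries_to_kill(rows))  # [12]
```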

Lots of folks here are saying they should have fired the CTO or the DBA or the person who wrote the doc instead of the new dev. Let me offer a counterpoint. Not that it will happen here ;)

They should have run a post-mortem. The idea is to understand the processes that led to a situation where this incident could happen. Gather stories; understand how things came to be.

With this information, folks can then address the issues. Maybe it shows that there is a serially incompetent individual who needs to be let go. Or maybe it shows a house of cards with each card placement making sense at the time and it is time for new, better processes and an audit of other systems.

The point is that this is a massive learning opportunity for all those involved. The dev should not have been fired. The CTO should not have lost his shit. The DB should have had regularly tested backups. Permissions and access need to be updated. Docs should be updated to not contain sensitive information. The dev does need to contact the company to arrange surrender of the laptop. The dev should document everything just in case. The dev should have a beer with friends, relax for the weekend, and get back on the job hunt next week. Later, laugh and tell of the time you destroyed prod on your first day (and what you learned from it).

>They put full access plaintext credentials for their production database in their tutorial documentation

WHAT THE HELL. Wow. I'd be shocked at that sort of thing being written out in a non-secure setting anywhere at all, never mind in freaking documentation. Making sure examples in documentation are never real and will hard-fail if anyone tries to use them directly is not some new idea; heck, there's an entire IETF RFC (#2606) devoted to reserving TLDs specifically for testing and example usage. Just mind-blowing, and yeah, there are plenty of WTFs there that have already been commented on in terms of backups, general authentication, etc. But even above all that, if those credentials had full access, then "merely" having their entire DB deleted might even have been the best-case scenario, versus having the entire thing stolen, which seems quite likely if their auth is nothing more than a name/pass and they're letting credentials float around like that.
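That idea can even be linted for. A sketch of my own (RFC 2606 reserves the `.test`, `.example`, `.invalid`, and `.localhost` TLDs plus `example.com/net/org`; a doc check could reject any hostname outside that set):

```python
RESERVED_TLDS = {"test", "example", "invalid", "localhost"}
RESERVED_DOMAINS = {"example.com", "example.net", "example.org"}

def is_safe_doc_hostname(hostname):
    """True if a hostname in documentation uses an RFC 2606 reserved
    name, so copy-pasting it can never reach a real system."""
    hostname = hostname.lower().rstrip(".")
    if hostname in RESERVED_DOMAINS or any(
        hostname.endswith("." + d) for d in RESERVED_DOMAINS
    ):
        return True
    return hostname.rsplit(".", 1)[-1] in RESERVED_TLDS

print(is_safe_doc_hostname("db.example.com"))       # True
print(is_safe_doc_hostname("prod-db.internal.io"))  # False: flag it
```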

It's a real bummer this guy had such an utterly awful first day on his first job, particularly since he made a huge move and, from the sound of it, sunk quite a bit of personal capital into taking it. At the same time, that sounds like a pretty shocking place to work, and it might have taught him a ton of bad habits. I don't think it's salvageable, and I'm not even sure he should try. They likely had every right to fire him, but threatening him at all with "legal" for that is very unprofessional and dickish. I hope he'll be able to bounce back and actually end up in a much better position a decade down the line, having some unusually strong caution and extra care baked into him at a very junior level.

Plot twist: the CTO or senior staff needed to cover something up (maybe a previous loss of critical business data) and arranged for this travesty to happen, provided a sufficient number of junior devs went through that "local db setup guide" mockery of a doc.

Either that, or this is a "worst fuckup on the first day on the job" fantasy piece - I refuse to acknowledge living in a world where the alternatives have any meaningful non-zero probability of occurring.

Lots of people in the thread are commenting how surprised they are that a junior dev has access to production db. Both jobs I've had since graduating gave me more or less complete access to production systems from day one. I think in startup land - where devops takes a back seat to product - it's probably very common.

People will screw up, so you have to do simple things to make screwing up hard. The production credentials should never have been in the document. Letting a junior have prod-level access is not that far out of the norm in a small startup environment, but don't make those credentials part of the setup guide. Sounds like they also have backup issues, which points to overall poor devops knowledge.

Not part of this story, but another pet peeve of mine is scripts checking strings like "if env == 'test' ... else <run against prod>". This sets up another WTF situation: if someone typos 'test', the script now hits prod.
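One fail-closed alternative (my own sketch; the env names and URLs are placeholders): treat the environment as an allow-listed enum and refuse to run at all on anything unrecognized, so a typo aborts the script instead of silently selecting prod.

```python
KNOWN_ENVS = {"test", "staging", "prod"}

def database_url(env):
    """Fail closed: an unknown or misspelled env aborts instead of
    falling through to production."""
    if env not in KNOWN_ENVS:
        raise ValueError(f"unknown env {env!r}; expected one of {sorted(KNOWN_ENVS)}")
    if env == "prod":
        # Could additionally require an explicit --yes-really-prod flag here.
        return "postgres://prod-db.internal/app"
    return f"postgres://{env}-db.internal/app"

print(database_url("test"))  # postgres://test-db.internal/app
try:
    database_url("tets")     # the typo now raises instead of hitting prod
except ValueError as e:
    print(e)
```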

Yeah, another case of "blame the person" instead of "blame the lack of systems". A while back, there was a thread here on how Amazon handled their S3 outage, caused by a devops typo. They didn't blame the guy who made it; they beefed up their tooling instead.

I wonder whether that single difference - blaming the person vs. fixing the system/tools - predicts the failure or success of an enterprise.

Assuming the details are correct, this should be considered a win by the junior dev. It only took a day to realize that this is a company he really, really doesn't want to try to learn his profession at.

Guaranteed the CTO is busily rewriting the developer guide and excising all production DB credentials from the docs so that he can pretend they were never there. While the new guy's mistake was unfortunate in a very small way, the errors made by the CTO and his team were unfortunate in a very big way. The vague threat of legal action is laughable, and the reaction of firing the junior dev who stumbled into their swamp of incompetency on his first day speaks volumes about the quality of the organization and the people who run it. My advice... learn something from the mistake, but otherwise walk away from that joint and never look back. It was lucky that you found out what a mess they are on day 1.

No disrespect to the OP, but this sounds pretty fake. If the database in question was important enough to fire someone over immediately, then there wouldn't have been creds floating around in an onboarding PDF. And involving legal? Has anyone here heard of anything similar? I'm just one data point, but I know I haven't.

For some years now I've stopped bothering with database passwords. If technically required, I just make them the same as the username (or the database name, or all three the same if possible). Why? Because the security offered by such passwords is invariably a fiction in practice; I've never seen an org where they couldn't be dug out of docs, a wiki, or test code. Instead, database access should be enforced by network architecture: the production database can only be accessed by the production applications, running in the production LAN/VPC. With this setup, no amount of accidental (or malicious) commands run by anyone from their local machine (or any other non-production environment) could possibly damage the production data.

It's not the CTO's fault. It's the document's fault! We should never have documentation again, this is what it has done to us! We need to revert to tribal knowledge to protect ourselves. If we didn't document these values, people wouldn't be pasting them in places they shouldn't be!

Side question, as a dev with zero previous ops experience, now the solo techie for a small company and learning ops on the fly, we're obviously in the situation where "all devs have direct, easy access to prod", since I'm the only dev. What steps should I take before bringing on a junior dev?

Several years back I worked as a DBA at a managed database services company, and something very similar happened to one of our customers, who ran a fairly successful startup. When we first onboarded them I strongly recommended that the first thing we do was get their DB backups happening on a fixed schedule, rather than an ad-hoc basis, as their last backup was several months old. The CEO shut me down, and instead insisted that we focus on finding a subtle bug (you can't nest transactions in MySQL) in one of their massive stored procedures.

It turns out their production and QA database instances shared the same credentials, and one day somebody pointed a script that initializes the QA instances (truncate all tables, insert some dummy data) at the production master. Those TRUNCATE TABLE statements replicated to all their DB replicas, and within a few minutes their entire production DB cluster was completely hosed.

Their data thankfully still existed inside the InnoDB files on disk, but all the organizational metadata was gone. I spent a week of 12-hour days working with folks from Percona to recover the data from the ibdata files. The old backup was of no use for restoring, since it was several months old, but it was helpful in that it provided a mapping of the old table names to their InnoDB tablespace IDs, a mapping destroyed by the TRUNCATE TABLE statements.

One of the questions I asked my manager during the interview process was how he felt about mistakes.

I knew I was being brought in to rearchitect the entire development process for an IT department, that I would make architectural mistakes no matter how careful I was, and that I would probably make mistakes that would have to be explained to CxOs.

I did the same thing early on in my career: shut down several major ski resorts in Sweden for an entire day during booking season by doing what we always did, running untested code in production to put out fires. Luckily, my company and our customers took that as a cue to tighten up the procedures instead of finding someone to blame. I hear this is how it works in aviation as well: no one gets blamed for mistakes, since that only prevents them from being dealt with properly. Most of us are human, and humans make mistakes. The goal is to minimize the risk of mistakes.

I worked with someone who did this, early in my career. His bacon was saved by the fact that a backup had happened very soon before his mistake.

His was worse though, because he had specifically written a script to nuke all the data in the DB, intending it for test DBs of course. But after all that work, he was careless and ran it against the live DB.

It was actually kind of enlightening to watch, because he was considered the "genius" or whatever of my cohort. Which is to say: there are different kinds of intelligence.

Technical infrastructure is often the ultimate in hostile work environments. Every edge is sharp, and great fire-breathing dragons hide in the most innocuous of places. If it's your shop, then you are going to have a basic understanding of the safety pitfalls, but you're going to have no clue as to the true severity of the situation.

If you introduce a junior dev into this environment, then he's the one who is going to discover those pitfalls, in the most catastrophic ways possible. But even experienced developers can blunder into them. At least twice I've accidentally deployed to production, or otherwise run a powerful command intended for a development environment against production.

Each time, I carefully analyzed the steps that led up to running that command and implemented safety checks to keep it from happening again. I put all of my configuration into a single environment file so I can see the state of my shop at a glance. I make little tweaks to the project all the time to maintain this, which can be difficult because the three devs on the project work in slightly different ways and the codebase has to accommodate all of us.

While this is all well and good, my project has a positively decadent level of funding. I can lavish all the time I want in making my shop nice and pretty and safe.

A growing business concern cannot afford to hire junior devs fresh out of code school / college. That's the real problem here, not the CTO's incompetence; any new-ish CTO in a startup is going to be incompetent.

Cool story, but I think this is fake. Since there are 40 people in the company, it seems like at least a few people before him followed the onboarding instructions. I just don't believe there were that many people who a) didn't do the same thing he did, or b) didn't change the document.

I destroyed an accounting database at a company during a high school summer job.

A mentor was supervising me and continually told me to work slower, but I was doing great performing some minor maintenance on a Clipper application and didn't even need his "stupid" help ... until I typed 'del *.db' instead of 'del *.bak'. Oooops!

Luckily the woman whose computer we were working on clicked 'Backup my data' every single day before going home, bless her heart, and we could copy the database back from a backup folder. A 16 year old me was left utterly embarrassed and cured of flaunting his 1337 skillz.

From Sr dev/lead dev, dev manager, architect, ops stack, all the directors, A/S/VPs, and finally the CTO. You could even blame the CEO for not knowing how to manage or qualify a CTO. Even more embarrassing is if your company is a tech company.

I think proper due diligence would find that the fault lies with the existing company.

It is not secure to give production access and passwords to a junior dev, and if you do, you put controls in place. If there is insurance in place, I think some of its requirements would have to include reasonable access controls.

This company might find itself sued by customers for its prior and obviously premeditated negligence from the lack of access controls (the doc, the fact they told you 'how' to handle the doc).

But the junior dev is not fully innocent either: he should have been careful about following instructions.

For extra points (to prove that he is a good developer), he should have caught the screw-up with the username/password in the instructions. Here's an approximate line of reasoning:

---

What is that username in the instructions responsible for? The production environment? Then what would happen if I actually ran my setup script against production? The production database would be wiped? Shouldn't we update the setup instructions, and some other practices, to make sure that can't happen by accident?

---

But it is very unlikely that this junior dev would be legally responsible for the screw-up.

After adding up the number of egregious errors made by the company, I'd almost be inclined to say the employee has grounds for wrongful termination, or at least fraudulent representation to recoup moving expenses.

Even startups have contracts with their customers about protecting the customer's data. If it is consumer data, there are even stricter privacy laws. Leaving the production database password lying around in plain text is probably explicitly prohibited by the contracts, and certainly by privacy laws. The CTO should pay him for the rest of the year and give him a great reference for his next job, in return for him to never, ever, ever tell anyone where he found the production password.

I'm surprised a junior dev on his first day isn't buddied up with an existing team member.

In my line of work, an existing employee who transferred from another location would probably be thrown in at the deep end, but someone who is new would spend some time working alongside someone friendly and knowledgeable. This seems the decent thing to do as humans.

Yeah this infra/config management sounds like land-mine / time bomb incompetence territory. You just were the unlucky one to trigger it. Luckily this gives you an opportunity to work elsewhere and hopefully be in a better place to learn some good practices - which is really what you're after as a junior dev anyway.

Repeat after me, while clicking your heels together three times: "It is not my fault. It is not my fault. It is not my fault." It was obvious as I read your account that you would be fired. A company that allowed this scenario to unfold would not understand that it was their fault.

Everybody agrees that the instructions shouldn't have even had credentials for the production database, and the lion's share of the blame goes to whoever was responsible for that.

There is still a valuable lesson for the developer here though - double check everything, and don't screw up. Over the course of a programming career, there will be times when you're operating directly on a production environment, and one misstep can spell disaster - meaning you need to follow plans and instructions precisely.

Setting up your development environment on your first day shouldn't be one of those times, but those times do exist. Over the course of a job or career at a stable company, it's generally not the "rockstar" developers and risk-takers that get ahead; it's the slow and steady people who take the extra time and never mess up.

Although firing this guy seems really harsh, especially as he had just moved and everything, the thought process of the company was probably not so much that he messed up the database that day, but that they'd never be able to trust him with actual production work down the line.

It was the second day, and I only wiped out a column from a table, but it was enough to bring business for several hundred people down for a few hours. It was embarrassing all round really. Live and learn though - at least I didn't get fired!

I would suggest that, once this is sorted out, you publicly mention the company name so no other engineer falls into this trap. It will be a lesson for them to properly follow basic practices for data storage.

I would assume this was a setup to test whether the intern could follow simple instructions, to provide a lesson on the huge consequences of small mistakes, and to have a viable reason to fire him afterwards; but I'm wearing my tinfoil hat right now, too.

It is really unfair to have fired him. The OP is not the one who should have been fired. The guy in charge of the DB should be fired, and the manager who fired the OP should be fired too. And, by the way, the guy in charge of the backups as well.

When I worked for SAP back in 2007 (I was a fresh grad at the time), I was working in the business intelligence (reporting, analytics, and data warehousing) group and noticed how cumbersome it was for organizations to simply create and view reports (we're talking millions of dollars). I once said to my boss "you realize that in the future we'll simply just write 'show me a line graph for sales in the northeast'".

I played around with this the other day. I have a spreadsheet with a bunch of columns. It wasn't immediately obvious how to use the Explore feature intuitively. It graphed data, but not really the data I wanted. I was also hampered by it using only about 200 pixels on the right side of the screen.

I started typing in a question, but it couldn't guess what I was interested in. YMMV. Perhaps with a fairly simple spreadsheet you can intuit things? About 10 years ago I built a Google Sheets competitor called Numbler (well, I didn't know it was a competitor; Google Sheets came out a couple of months later). But one of the things I learned is that people use spreadsheets for just about everything, and the data can be in the weirdest formats.

Can we talk about getting data into Google Sheets? Is there a standard way to build a pipe from, say, a reporting database to dump aggregates into Google Sheets?
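One common shape for such a pipe (a sketch of my own; I'm assuming access through the Sheets REST API via something like `google-api-python-client` or `gspread`, and the sheet ID, range, and column names are placeholders): a scheduled job queries the reporting DB, flattens the aggregates into the 2D header-plus-rows list the values API expects, and writes it in one update call. The payload-building part is shown runnable; the API call itself is only indicated in comments since it needs credentials.

```python
def aggregates_to_rows(aggregates, columns):
    """Flatten a list of dicts (e.g. rows from a reporting query)
    into the 2D list (header row first) the Sheets values API expects."""
    rows = [list(columns)]
    for record in aggregates:
        rows.append([record.get(col, "") for col in columns])
    return rows

daily_sales = [
    {"day": "2017-06-01", "orders": 42, "revenue": 1234.50},
    {"day": "2017-06-02", "orders": 37, "revenue": 980.00},
]
values = aggregates_to_rows(daily_sales, ["day", "orders", "revenue"])
print(values[0])  # ['day', 'orders', 'revenue']

# Then, with an authorized client (names are placeholders):
# sheets.spreadsheets().values().update(
#     spreadsheetId=SHEET_ID, range="Daily!A1",
#     valueInputOption="RAW", body={"values": values}).execute()
```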

I built a private add-on for my company that surfaces specific aggregates as Sheets functions (e.g. getSalesByDay(...)), and I have found so many bugs in that whole ecosystem. Deploys are completely manual and require copy-paste, you can't reliably tell what version is being invoked in a sheet, there's invisible cell-level caching that caches error state, concurrency limits are too low and impossible to work around, and more. It all kinda sorta works, but Google doesn't make it easy.

I wish that Google would take the same sort of "embed" idea further in G-Suite. I find it amazing that I can't (as far I know) reference slides from another deck in Google Slides. The use case would be putting together a series of "core" slides that are updated across your organization as they change. Given the web nature of G-Suite, this, to me, would seem like a no brainer.

Also, inserting charts from Google Sheets into Google Presentations looks pretty terrible. I often revert to Excel because the charting is far superior imho (though just as challenging to wrangle).

They are solving a problem that doesn't really exist. The challenge is not the last step of producing a data report; it's the steps at the beginning: getting good data in, formatting, joining multiple sources, automation, dealing with junk data, procedures, etc.

I don't understand the example. What's the difference between typing "Show me a line graph" and clicking a button in Excel that does the same thing?

Oh, wow. I love where this is headed. Spreadsheets are one of the most abused products in a normal business--used for everything, and then some poor excel jockey ends up being forced to create a semblance of order from the chaos.

Charts are a good add-on, but I just wanted to understand how they're able to do this ... I mean the machine learning part. For example, if somebody asks "Show me sales of X product in last year", how does that get interpreted into an actual SQL query from a machine learning perspective?
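No idea what Google actually does, but the simplest mental model is slot-filling: extract a metric, an entity, and a time range from the question, then fill a SQL template. A deliberately toy sketch (table and column names are made up; real systems use learned parsers, not one regex):

```python
import re

def question_to_sql(question):
    """Toy slot-filling parser for questions shaped like
    'Show me sales of X product in last year'."""
    m = re.search(r"show me (\w+) of (\w+).*last year", question.lower())
    if not m:
        return None  # didn't match the one template we know
    metric, product = m.groups()
    return (
        f"SELECT SUM({metric}) FROM orders "
        f"WHERE product = '{product}' "
        f"AND order_date >= DATE '2016-01-01' AND order_date < DATE '2017-01-01'"
    )

print(question_to_sql("Show me sales of X product in last year"))
```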

I'm wondering how Microsoft is responding to this. Do they expect their current Excel dominance to continue despite competitors constantly catching up to feature parity and even extra goodies, like this one?

I wonder if we will see more software including query based input like their charts, and what sort of speed improvement we could see? At first I was not excited to type something where I could click a couple buttons, but then I recognized the other enhancements such as applying a filter right away.

I'm not convinced it's better just because it has machine learning on the back end, but if excel would learn how I want my graphs made from how I manually adjust the graphs (adding axis labels and a title, color preferences, never a 3d bar or pie chart), that'd be a nice enhancement. I'm sure there's a setting, but I haven't searched for it.

There's a lot of basic stuff - column titles, moving columns about, filtering, search - that I found had quite a learning curve with Sheets. I built and use this instead: Bell+Cat https://bellpluscat.com

With this sort of thing it's easy to get to 80%, but good luck getting that last 20% without formalisms. It might be useful for getting a quick feel for a data set to confirm some intuitions, but not really useful beyond that.

Does anyone know how this kind of stuff gets built? I'm considering a spreadsheet-y internal admin dashboard for my startup. I was looking at https://github.com/JoshData/jot to be able to sync stuff on the client side to the server.

Has anyone worked on something like this? The big challenge is synchronization - between the server and multiple clients - while being able to offload a lot of the computation onto the client.

I also wonder how the security is built. If I maliciously change the formulas in my browser, will the backend datastore still accept the data?
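On that last question: a well-built backend wouldn't trust client-computed values at all. It would treat the formulas and inputs as the source of truth, recompute derived cells server-side, and reject or overwrite any client-supplied result that disagrees. A sketch of that idea (my own; `evaluate` stands in for a real, sandboxed formula engine):

```python
def accept_cell_update(formula, inputs, client_value, evaluate):
    """Recompute the formula server-side and ignore the client's
    claimed value if it doesn't match."""
    server_value = evaluate(formula, inputs)
    if client_value != server_value:
        # Client is buggy or malicious: store the server's result instead.
        return {"accepted": False, "stored": server_value}
    return {"accepted": True, "stored": server_value}

# Trivial stand-in engine: only knows how to SUM its inputs.
evaluate = lambda formula, inputs: sum(inputs) if formula == "SUM" else None

print(accept_cell_update("SUM", [1, 2, 3], 6, evaluate))     # honest client
print(accept_cell_update("SUM", [1, 2, 3], 9999, evaluate))  # tampered value
```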

If you click a search result and end up seeing something completely different than what you expected based on the search result snippet, it shouldn't matter if you're the WSJ or a scam site trying to hack your Google rank. It's deceiving the user and inflating your search ranking at the expense of more deserving listings.

I'm watching this very closely. I pay for a WSJ subscription because I think their content is better than most, and also because I get sent a lot of links to their content. Something about this latter point feels like the argument people make about using Office because people still send them Excel and Word docs.

Similar to how software companies release free software to augment what makes them money, Bloomberg is able to spend a lot of money on producing content that is sponsored by their terminal subscriptions.

The WSJ might be in a unique situation where their primary audience will pay, often due to companies footing the bill for employees, so perhaps they can be one of the few news-producing companies that doesn't have to depend on Google for traffic, in that their primary audience loads up their front page multiple times a day just to see what's there.

I wouldn't be surprised if they did a deal with Bloomberg to provide their content on terminals to further strengthen their ties to their core audience.

>The Journal decided to stop letting people read articles free from Google after discovering nearly 1 million people each month were abusing the three-article limit. They would copy and paste Journal headlines into Google and read the articles for free, then clear their cookies to reset the meter and read more, Watford said.

After the harder paywall, what's the best guess of the percentage of those google-copy-pasters will convert[1] to subscribers paying $278 or $296.94 or $308.91 per year? My guess is less than 1/10th of 1%. I assume the vast majority of the 1 million are casual readers who don't have $300 discretionary income to splurge on a subscription. If they can't read for free with a workaround, they'll do without it.

In related trivia, I just read that The Economist's strategy is to allow the google-copy-pasters.

I'm not judging either company as right. It's interesting they go about it differently.

As a climber, there are very few people I would trust to have a more useful opinion on all this than Tommy Caldwell, a close friend and long-time climbing partner of Honnold's. It's so out there for most people that they jump to conclusions without proper knowledge of the subject.

Caldwell is in the interesting position of having to balance supporting his friend against getting over the fact that Honnold very well could die doing these attempts. His article does a great job expressing this.

If you are interested in this or the history of climbing in Yosemite and El Capitan, I highly recommend the documentary "Valley Uprising." Even if you aren't necessarily interested in climbing, it's a beautiful documentary. It's available on Netflix.

The thing I've most loved while following Alex's exploits over the past few years is how he talks about his mental preparation. He always seems very well prepared for whatever route he is climbing. To me, the biggest evidence of that is the fact that he called off this exact climb a few weeks ago because he felt conditions weren't right. That's really hard to do with media, etc. on your tail, even IF your life is literally at stake.

It would be one thing if he were just incredibly bold and daring and were getting away with it; instead, it's clear that his method is a very slow, methodical process in which he manages to practically guarantee that he will have a safe and effortless climb. Even in the interviews after this, it is clear that he is committed to his routine and managed to set up this climb in such a way that it simply represented a comfortable, natural step in his evolution as a climber. He talks about it almost matter-of-factly.

A couple of years ago it was in the news that a couple of blokes (Caldwell & Jorgeson) had free-climbed El Capitan.

So for those of us who know next to nothing about rock climbing, what's the difference here? Apparently Caldwell & Jorgeson were using ropes for safety, although not as aids for the climbing per se (hence why it was free climbing?). So this guy does it all alone, without any safety ropes, and in frickin' 4 hours? Waaaat?

Or was it a different route? The Caldwell & Jorgeson stories mention the "Dawn Wall"; is that something different from what Honnold climbed now?

achievements in climbing can be kind of difficult for non-climbers to see the relevance of, since the conventions of success seem a little arbitrary until you put some time into it. it was interesting to see the news cycle pick up tommy caldwell and kevin jorgeson's ascent of the dawn wall in yosemite in january 2015. it was certainly the biggest thing to happen in yosemite at the time, but no bigger in terms of its relevance to climbing than a handful of other ascents in the few years surrounding it. it was a huge climb, and worthy of all the attention it got, but it was a little peculiar to see it get more airtime than any climb since maybe the original dawn wall ascent by warren harding in 1970.

before alex's climb this week, it would be totally reasonable to make the claim that el cap will never get free-soloed. it's too sustained, the only feasible routes are too insecure. no one, expert or not, would ever get shouted down for making that claim, even among a cohort of dreamers who all want to live the impossible. among that cohort, free solo climbing isn't all that common; maybe one in a hundred climbers have ever climbed a difficult route taller than 100 feet without a rope. which makes him alien even within his sport.

honnold just landed on the moon. what he did doesn't require any of the quirks of convention that accompany most big-wall free climbs. everyone immediately understands the idea of scaling a cliff without a rope. everyone can even try it. el cap is a ten minute walk from the car. but in case the context of the climb is unclear, this is the kind of feat that only comes along every few generations.

maybe I'm overstating it. from one perspective, this climb was another incremental step on honnold's journey. all of his previous ascents were mind-bending as well: moonlight buttress in zion, the regular northwest face of half dome, el sendero luminoso in el potrero chico, mexico. besides, technical rock climbing as we understand it today is only two or three generations old at most, and it's already produced this monster of an achievement. we may see more in our lifetime. I just wouldn't bet on it.

This looks fantastic. If I had a VR device I would get this immediately. I've always had a fascination with trying to grok higher dimensions. I think it's just about impossible to have an intuitive understanding of them; 3D spatial reasoning is in our wiring through both nature and experience.

You know the theory of how language shapes your thinking? For example, in societies where there is no separate word for orange and red, people have extreme trouble telling the difference between them. And in some native tribes that use cardinal directions (north, south) rather than relative ones (left, right), people have an almost supernatural ability to know which direction they are facing without needing any other cues (sunlight, stars).

Point being - would being able to completely think in four dimensions have an impact on how you understand the world?

I'm curious how this would look if the 4D space was projected onto the 3D space instead of taking a cross-section, much like we already project 3D space onto 2D space (your display), to create "3d" graphics.
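By analogy with the usual 3D -> 2D camera, a 4D -> 3D perspective projection just divides by distance along the fourth axis instead of taking a cross-section. A minimal sketch (the camera distance `d` and the tesseract vertices are illustrative choices of mine, not anything from these games):

```python
from itertools import product

def project_4d_to_3d(point, d=2.0):
    """Project a 4D point (x, y, z, w) to 3D by perspective division
    along the w axis, the same way a 3D point is projected to 2D by
    dividing by its depth. `d` is an arbitrary camera distance."""
    x, y, z, w = point
    scale = d / (d - w)  # points with larger w appear "closer"
    return (x * scale, y * scale, z * scale)

# The 16 vertices of a tesseract, projected into 3D; rendering these
# (plus edges) gives the familiar cube-within-a-cube picture:
vertices = list(product([-0.5, 0.5], repeat=4))
projected = [project_4d_to_3d(v) for v in vertices]
```

Unlike a cross-section, this keeps every vertex visible at once; the trade-off is that, just like 3D -> 2D, distinct 4D points can land on top of each other.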

If you make a Kickstarter to 4D print them, I would support it for my kids. I think for kids, it's more important to play with real-world physical objects rather than their virtual computer representation.

As a person who studied knot theory and sat through other people's presentations about higher-dimensional knots, this looks like a neat treat! After hearing about the concept I bought the game and tried it out. I like how you map actual physical actions to objects in 4 dimensions. Usually this is projected onto the time axis, but with this interface it's much more fun to play with than a basically generic slider.

Now it's a cool toy, but there might be more practical applications for a 4D rigid body physics engine. Some materials design approaches[0] involve iterating through shape space to determine what shape a particle should be to get it to assemble into a desired structure. A 4D physics engine might be useful for this shape space iteration, as movement through shape space could be accomplished by moving a 4d rigid body along the 4th axis.

Easily the hardest part of learning about string theory for me (via reading "The Elegant Universe" [1]) was grasping the idea of multiple other dimensions.

The book tried its best to explain it by exploring a world starting with 1D and evolving to 3D, but it's still quite difficult to visualize, especially shapes like a "Calabi-Yau manifold" [2].

The one good thing I got out of learning about Calabi-Yau manifolds (and randomly reading another layman's story involving Yau's clash with the guy who solved the Poincaré conjecture) was a new interest in learning more about math and getting a layman's grasp of topology. Although I later learned manifolds are quite an advanced subset of topology.

I enjoyed the linked video, I was looking for a way to better understand 4+D in a way I could wrap my head around and an interactive game makes a lot of sense.

This is so cool, but it's driving me crazy. I was wondering if someone could provide me with more resources that help build intuition about 4D space. For example, in Miegakure, he walks through the 4th dimension to get to the other side of the wall, but that's assuming that no part of the wall extends into the fourth dimension (aside from rubble). What would happen if he switched back to the normal 3 dimensions in the middle of the wall? In Miegakure everything is kind of discretized (grassy area to desert area), but in reality that would be continuous. What would that actually look like, for example, in the area right next to the wall? How would this work at a subatomic level; would electrons be traveling in and out of the 4th dimension? Could this explain things like action at a distance or black holes? How does this explain shared surfaces in the 4th dimension?

I can't even answer a basic question like: if I were sitting in an easy chair and started looking down the 4th dimension, what would happen? Since it has to share one cross-section with the easy chair, would it simply be a fatter or skinnier easy chair? But that is true for any cross-section of the chair, correct? If the easy chair is the 3D cross-section of a 4D object, then what would the 3D cross-section exchanging one of our spatial dimensions for the hidden one look like? How does gravity work in those 3 dimensions (2 of our spatial dimensions + 1 of the hidden dimension)? Supposing the world was like this, wouldn't it be obvious if any object was extending into the 4th dimension, and thus can we reason that our world must be strictly 3-dimensional?

If there were 4 dimensions, since we can't see or interact with the 4th, does it stand to reason that the spatial extent of any object doesn't extend into it? For example, since the three-dimensional cross-section of a hypersphere changes diameter, does that mean the 4-dimensional analogues of Earth are just different-size Earths? I also notice that in one of the Miegakure videos the windmill in the new 3D space is like a cross-section of the windmill but extruded for a distance. I'm guessing this is a product of the way the 4th dimension is discretized, but I'm wondering: what would that really look like if the game weren't made that way?
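The changing-diameter observation can be made exact: slicing a 4D hypersphere of radius R at offset w along the hidden axis yields an ordinary sphere of radius sqrt(R² - w²). A small sketch (numbers purely illustrative):

```python
import math

def cross_section_radius(R, w):
    """Radius of the ordinary 3D sphere you see when a 4D hypersphere
    of radius R is sliced at offset w along the hidden fourth axis.
    Returns 0 when the slice misses the hypersphere entirely."""
    if abs(w) >= R:
        return 0.0
    return math.sqrt(R * R - w * w)

# Slicing a radius-1 hypersphere at increasing offsets: the visible
# sphere shrinks and then vanishes, which is exactly the "different
# size Earths" effect.
for w in (0.0, 0.5, 0.9, 1.0):
    print(w, cross_section_radius(1.0, w))
```

So a 4D Earth sliding along the hidden axis would indeed look to us like an Earth smoothly growing, shrinking, and finally disappearing.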

It's a shame it's only for iOS and Vive. I wonder how difficult it would be to make an open source desktop/browser version? Even if it's a lot simpler, it would be neat to feel what it'd be like to play around in 4 dimensions.

What annoys me most: things "disappear". I don't recall the 3D -> 2D mapping making things disappear, just surfaces hiding other surfaces. But maybe this doesn't carry over to 4D?

With the 2D->3D analogy they are taking cross-sections, and I really don't like these. Just throw it all on there! That would also mean you project your 4D world onto a 3D camera, which you then project onto a 2D surface to display.

This and Miegakure show cross-sections or projections of a 4D space that are common but that show details that could not possibly be seen by functional 4D eyes. Are there any attempts out there at showing a 3D representation of what 4D eyes could see? (For example, if you have a solid 4D cube, you can only see the outside of it, but the cross-section shows parts of the inside, as happens when you take 2D cross-sections of a 3D cube.)

One of the first things I thought of when I got a vive is building an app to let one intuitively navigate and understand four dimensional space. I never had the time or talent to hack something together though so I'm glad this exists.

There are several 4D games, but the one that gave me the best grasp of the 4D world is this one [1], a puzzle in which you manipulate a hypercube. You first play in 2D and 3D before going to 4D. You end up with an intuitive understanding of 4D.

I can't get this to run on iOS. Is it supposed to work? It just sits at the splash page playing a slightly interactive animation over and over. There's an arrow that I tried tapping and dragging and nothing happens.

Parts of the theory of disruption - specifically, that an entity with less entrenched structures can solve problems in more efficient ways - apply to nation states as well.

When 25% of your country has no electricity at all, you get to imagine parts of your grid from the ground up. If there's an expectation of load-shedding and your grid doesn't have to be at 100% in all places, you have room to make mistakes. Most importantly, if the renewables aren't replacing but rather adding to your energy generation capacity, you don't have to fight entrenched fossil fuel businesses and the associated regulatory capture to get started.

In many ways it's similar to how telephony spread in the developing world. They skipped landlines entirely and went straight to GSM.

I'm terribly excited about renewables in parts of Africa and the rural areas in South Asia. My family is originally from Bangladesh. The local grid is so unreliable in rural areas and Chinese solar equipment so cheap that most of my family members who live in villages just bought a solar installation instead of waiting around for the utilities to run wiring.

The current narrative is that India desperately needs coal and our coal is the cleanest, so we need to dig it up [1] and sell it to them to help fight climate change, because otherwise they're going to get 'dirtier' coal from somewhere else. If you disagree, then you believe that Indians don't deserve electricity.

And this isn't a strawman, this is almost verbatim what's being said in Parliament. The cognitive dissonance with our current PM is strong.

I am a young Indian. I have a dream: the dream that we are running electric vehicles, we have banned plastic everywhere, we have stopped burning our garbage, and we have improved air quality. I think we can start with auto-rickshaws and public transport. Auto-rickshaws account for a lot of the traffic and pollution in the cities. We could replace them with smart electric ones. They move around only in cities, so it won't be a big problem to build charging infrastructure. Then we could replace buses and increase their number. Of course we need a lot of subsidies and incentives to make it happen. But it is POSSIBLE. But first I need to build billion-dollar companies, sell them, and then use all that money to build this dream. I think someone did something like this in the USA, right? ;)

India is doing pretty well in green energy on the whole! India is the 4th-largest generator of wind energy in the world, and has the largest solar power plant in the world (Kamuthi in Tamil Nadu). I at least hope that India is not inclined to return to coal with the US's exit from the Paris accord.

The point it tries to make is that the demand exists; the challenge is the government and the policy makers.

One other point made in the same video is at the end, by Prime Minister Modi: we need tech transfer from the US, for solar and nuclear. India is probably the only non-NPT country to be authorised by the US Congress for civilian nuclear technology sales.

But all bets are off now. One does not know how the current regime will operate due to the "coal is best" rhetoric.

Of course, Tesla might still move to India given the repartee between Anand Mahindra (of Mahindra motors) and Elon Musk yesterday!

Last I checked, the reason India's coal power plants were running at 60% was not because of lack of demand, but because fixed prices make it uneconomical to produce coal and the government monopoly Coal India is utterly incompetent and inefficient even by Indian government standards...

Yeah, unless India influences corrupt politicians in neighboring countries to build coal-based power plants there and imports the electricity from those plants. Search "Rampal Power Plant", "greatest mangrove forest of the planet" etc. Also please see: https://www.washingtonpost.com/news/energy-environment/wp/20...

One thing I learned on a trip to Kerala is that most people in the region actually burn their garbage - the government doesn't provide disposal service. While I'm sure the particulate and chemical emissions of this practice are awful, I'm curious about its carbon implications.

I wonder why the NYT conveniently chose to ignore the previous US administration's WTO complaint over India's local content requirements for solar panels, and its subsequent verdict against India.

India required US solar panel manufacturers to source cells locally, which was challenged by the US at the WTO, which subsequently ruled in the US's favour. So India is not just moving forward with green energy in spite of its 'unfair share' of global climate policy burdens, but is doing so against hurdles imposed specifically to target its green energy efforts.

As an Indian, I'm really proud to read this. I'm hoping that they strategize this well and execute it well. India can easily harness wind and solar energy, create a really strong energy surplus, and supply it to neighboring countries.

The whole region can benefit. India can be a beacon of peace and use it to stabilize the region and become a leader.

I know this is completely beyond the topic, and only relevant in regards to the website, but, as someone who has been compulsively clicking text while reading articles his whole life, this site design is absolute garbage. When I double-click text I do not expect the site to react. Double-click in my world is known for highlighting, not increasing font size. Sorry, end rant.

Green doesn't automatically mean better, and dirty does not just mean coal and fossil fuels; it is how the technology is used. He argues alternative energy leads to deforestation, destruction of habitats, and deaths from diverting vegetable oil which could otherwise be used to prevent some starvation. If you accept these things, then alternative energy becomes extremely selfish, pushing the burden outside your cities.

The arrest warrant says nothing about printer dots, actually. It says that once they saw it was printed (the Intercept showed them a copy to confirm its legitimacy), they simply looked at who'd printed the original document. Of the six people who had, she was the only one whose desk computer showed email contact with the Intercept.

They didn't even need the yellow dots. She literally emailed the Intercept from her work email and was one of a trivial number of people who'd printed it in the first place.

"FBI special agent Justin Garrick told a federal court that Winner - a cross-fit fan who graduated high school in 2011 and was in the US Air Force, apparently as a linguist - confessed to reading and printing out the document, despite having no permission to do so."

So, she joined the company 3 months prior, and it was 'permission' rather than enforced access rights that they relied on for new trainees not to color outside of the lines.

According to the FBI arrest affidavit, only six people printed that document, and she emailed The Intercept from her own work computer.

So she would have been identified even if she or The Intercept had the sense to remove or alter the DocuColor dots.

"The U.S. Government Agency conducted an internal audit to determine who accessed the intelligence reporting since its publication. The U.S. Government Agency determined that six individuals printed this reporting. WINNER was one of these six individuals. A further audit of the six individuals' desk computers revealed that WINNER had e-mail contact with the News Outlet. The audit did not reveal that any of the other individuals had e-mail contact with the News Outlet"

The arstechnica article[1] reports, based on the FBI document, that the NSA determined who leaked the info by finding creases in the documents provided to them for authentication by the Intercept demonstrating that they were leaked by being printed out.

I remember an HN thread years ago on these yellow-dot watermarks, where an employee at a printer manufacturer said there was no indication this was ever used by law enforcement to track who printed what because, for one thing, the team who implemented the watermarking never documented it or taught anyone how to decode the watermarks.

Well, here we are today with this NSA story.

I think it's possible that US-based printer manufacturers implemented watermarking on special request from the NSA. That would also explain why the printer manufacturer employees never needed to teach anyone how to decode them. It wasn't their specs in the first place.

As someone else pointed out already, there is no evidence the dots were used. Only six people printed the document, and she was one of them. Then they found logs of her emailing The Intercept from her work computer.

So there are definitely printer dots in the posted images, but how do we know they are from a printer at NSA? They could be from a printer at The Intercept, a public copy and print shop, or anywhere else, intentionally left in as a red herring.

Of course, as others have posted, she doesn't appear to have tried hard to cover her tracks at NSA so that doesn't seem too likely. But stating that she accidentally left in the printer dots is assuming several facts not in evidence.

tl;dr: the dots may have exposed metadata of the printing, but from what we know officially, NSA's internal access control system was all that was needed to argue probable cause against Reality Winner.

So the dots don't look good in terms of The Intercept's opsec, but from what we know from the Justice Department's affidavit [0] and the search warrant [1], those dots were likely inconsequential as evidence compared to the audit trail that Winner left when she accessed and printed the file. It's not unreasonable to believe that the NSA and its contractors can track access activity by user, post-Snowden; I mean, it's a feature built into states' DMV systems, which is how cops get busted in the occasional scandal of unauthorized lookup of citizen info [2].

The warrant and affidavit allude to such a system when describing the audit that was done as soon as the NSA was made aware (because the Intercept reached out to them) that the document was out in the wild. At that point, it doesn't seem hard to query their own logs to find all users who accessed and/or printed out the document. Unfortunately for Winner, it seems that very few (six) NSA employees printed out the document, and I'm sure it didn't help that her background (former Air Force, fluent in several Middle Eastern languages) would indicate that her job did not require her to have a physical copy of this particular document.

The affidavit and warrant mention "physical" metadata that they say supports their case, but it's all circumstantial:

1. The documents show evidence of creases/folding, which indicates that someone had to secret it out of the NSA physically (i.e. they printed it first). But that folding/creasing could have come from the reporters printing out their own copies of the document.

2. The affidavit says that of the 6 employees who had printed out the document, Winner was the only one to have email contact with The Intercept. But the warrant specifies that this email contact occurred using her private Gmail address in March, and it was limited to 2 emails: her subscribing to The Intercept's podcast, and a confirmation email. i.e. she didn't use email (that we know of) to talk to the Intercept.

There's no mention of the yellow dots, which, sure, we could argue that the NSA is just keeping that bit of tradecraft secret. But keep in mind that the NSA started their investigation last week, with the FBI interviewing Winner just a few days ago (on a Saturday no less).

The other key point is that, according to the warrant, the Intercept journalist sent along the leaked documents to a NSA source for confirmation using a smartphone, i.e. they texted smartphone photos of the documents. It seems possible that that kind of ad hoc scanning would make the yellow dots illegible, depending on how much care was taken to photograph the documents.

At any rate, it's kind of irrelevant. Assuming Winner used her own NSA credentials to peruse the system, the access control logs were all that were needed to out her as fast as the NSA and FBI were able to. However, it's worth noting that if the NSA had been clueless until the Intercept's published report, the actual published document apparently did reveal the yellow dots. This means that even if Winner were one of many NSA employees to print out the documents, the yellow-dot timestamp would greatly help in narrowing the list of suspects.

So, it's wrong to say the Intercept outed her, because we don't know what would've happened in an alternative reality in which the NSA didn't start its investigation until after seeing the published report. It is OK, probably, to speculate that the Intercept was sloppy in handling the documents...but that's not what led to Winner being outed so quickly.

Something smells fishy here. How did the Intercept maintain enough opsec to stay in contact with Snowden (who would have dropped them like a hot potato if they didn't seem competent) and then do this, with the same general staff in place?

I've been watching a TV series called "Halt and Catch Fire", about the early PC industry in the 1980s. I've enjoyed it very much, but sometimes I feel like the writers sacrifice historical plausibility to create strong female leads for a contemporary audience.

Ironically, so many of the GIANTS of computing's earliest days were female. Even at the rank-and-file level, women made up an astonishing number of early programmers. If you talk to retirement age people in our field, you'll find that mainframe developers were commonly female all through the 1960's and 1970's. It wasn't until the PC revolution that the field shifted to become more exclusively male.

I wonder when we'll see writers and TV/film producers start to explore that period of history? I'm sure there are some amazing stories that could be told. The crazy thing is, even if you just presented the field as-is without any embellishment, most people would assume that you were re-writing history in the name of political correctness. Most of the general public (hell, most young professionals in our field) just has no idea about this.

And before you get angry at me, answer this simple question: have you actually ever used COBOL? I spent a year of my life translating a COBOL mess into Delphi. It was horrible - the code I was working with had no functions (unless you think of a module defined in an entire file as a function), global scope on variables, and tons of ugly COBOL boilerplate, as dictated by the language.

And it's no surprise that COBOL was a historical dead-end. Unlike FORTRAN and Lisp, it begat nothing.

That's why I'm hard-pressed to venerate anyone that had anything to do with perpetuating that mess.

Cobol's legacy lives on in PL/SQL, ABAP, and other enterprise data handling languages. While everybody is quick to point out that this is not "real programming", they required astonishing amounts of engineering effort to make things work.

I would love to see a decent, modernized COBOL -- the same way they have modernized Fortran, so it is now a pretty decent numerical computing language. It would be a great hit.

Although Ms. Sammet's passing has been previously discussed here on HN, this new NYT article provides a different and somewhat more colorful perspective on the life of this remarkable woman. Worth a read.

To like COBOL, you need to possess a particular kind of rigid corporate mindset. I wonder whether its fathers/mothers had the mentioned psychic structure, or whether it was a fruit of 50's salaryman mental subjugation. No offense meant.

"I think that, like species, languages will form evolutionary trees, with dead-ends branching off all over. We can see this happening already. Cobol, for all its sometime popularity, does not seem to have any intellectual descendants. It is an evolutionary dead-end-- a Neanderthal language."

I recommend anyone wanting to experiment with WASM to check out https://github.com/dcodeIO/webassembly which takes a lot of pain out of setting up the toolchain and lets you produce much leaner binaries as well.

I wonder how it would be, had capability-based system architectures taken off and become mainstream. I guess we wouldn't need WebAssembly to run untrusted code safely, because in such a system, all objects from whole programs to an object as small as the number 4 would be safe and sealed off from each other on a hardware level.

I'm reading Capability-Based Computer Systems [0] by Henry M. Levy (1984), motivated by wanting to learn about the Burroughs B5000 that Alan Kay has praised multiple times. I've only started to learn about these things and I don't understand the implications, but if I'm reading it correctly, such architectures would obviate the need for web-style safety measures such as process-sandboxing ala Chrome, shader sanitizing ala WebGL, etc, because everything in the system would be safe that way.

Remember when you could hit "View Source" to see how the web was built? I don't like where this is going. Minification was bad enough, now we're going to be getting more non-free blobs shoved into our browsers and this is being touted as a great new feature for us.

I'm trying to use asm.js to port Python to JS as a shared library, which would allow us to load and run arbitrary CPython modules (compiled for JS) in the browser. I generate code using emscripten, which is also able to generate WASM (which I'm not using at the moment though).

My experience so far:

Emscripten is quite mature and compiles even very complex C/C++ code without complaining. Statically linking code works fine, while dynamically loading shared libraries is still a bit problematic it seems.

With a BBC Micro (6502, 1MHz) I was able to achieve about the same FPS (I was genlocked for UK PAL TV transmission so either 25FPS or 50FPS) but only for a 40x25 grid, using 6502 assembler embedded in BASIC!

I wonder where the world is going with this. At first glance it looks like WebAssembly is a potential faster replacement for JavaScript in the browser. However, JavaScript is an increasingly popular language everywhere. I'm not sure people will want to move away from it for most development. The part that is actually causing performance problems with web applications is the HTML/CSS/DOM layer, which was not designed as a UI widget library but as a document rendering and styling framework.

WebAssembly/WebGL standardization enables the creation of high-performance replacements for HTML/CSS/DOM.

We might end up with applications that are still mostly written in JavaScript but that call into new WebAssembly/WebGL UI frameworks for rendering instead of into the HTML/CSS/DOM layer.

I actually went through all the steps of getting rustc and emscripten working, got it set up in Docker. It actually works; I was able to compile a Rust program to wasm. But... because emscripten uses a custom version of LLVM and clang, the image took up 25 GB of my hard drive! I just don't have that kind of space to leave it lying around, so I'll have to wait until they integrate into upstream I guess.

I might work a bit in Rust by itself and compile using emscripten later on the server, but kind of hard to do that if I want to access DOM/canvas/webGL, etc.

You can play my favorite arcade game ever in a wasm gameboy emulator (credit goes to Ben Smith of Google: https://github.com/binji/binjgb) in a web-based OS I've been working on for 5 years. This is known to work in current Chrome and Firefox. Keyboard game controls: 'w'=up 'a'=left 's'=down 'd'=right '.'=A ','=B space=start enter=select. Also, standard (17 button, 4 axis) USB gamepads should "just work" via plug'n'play.

The argument in the URL is the base64 encoding of an initialization script that is being passed to the desktop environment. Going to the link in current Edge should just open the "Arcade" app, with nothing else happening.

You should be able to drag-n-drop ".gb" or ".nes" ROM game files from your native desktop right onto the Arcade app window, and it should just start playing. You can also just drop them onto the web desktop to save them there, then just double-click them when you want to play. That way, the file is kept in the site's local storage.

As a provider of IaaS Cloud and of dedicated servers and colo, I hear this argument all the time. No one ever seems to include the network engineers, monitoring systems, the routers (better have more than 1!), the switches (distribution and access layers), the maintenance, software licenses (where applicable), customer support, cost of IP addresses, Accounts Payable, ARIN membership, RADB membership, cross-connects, optics, spares and/or support contracts, etc. And finally, you do not use 1Mbps at 100% for 24 hours per day, so while 1Mbps sustained for a month is ~320GB, in reality, the way most people transfer data, 320GB would look more like 3Mbps at the 95th percentile (the way burstable bandwidth is billed).

A basic 1Gbps commit on a 10Gbps port in a data center might cost you from $0.50/Mbps (something like Cogent) to maybe $1.50/Mbps (let's say Level 3); other providers could be $4+/Mbps. By the time you factor in all of the above overhead costs, the true cost of the bandwidth is much, much higher on a per-Mbps basis.
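For what it's worth, a back-of-the-envelope check of those figures (the per-Mbps prices are the illustrative ones above, not quotes from any provider):

```python
# Rough check of the "1Mbps for a month is ~320GB" figure and the
# 95th-percentile billing math. Prices are the illustrative per-Mbps
# numbers from the comment, not actual quotes.

SECONDS_PER_MONTH = 86400 * 30

def monthly_transfer_gb(mbps):
    """Data moved by a flat rate of `mbps` sustained for a 30-day month."""
    bits = mbps * 1e6 * SECONDS_PER_MONTH
    return bits / 8 / 1e9  # bits -> bytes -> gigabytes

print(monthly_transfer_gb(1))  # 324.0 GB, i.e. the "~320GB" above

# Bursty traffic moving the same ~320GB might bill at ~3Mbps at the
# 95th percentile; monthly cost at a few example commit prices:
for price_per_mbps in (0.50, 1.50, 4.00):
    print(f"${price_per_mbps}/Mbps -> ${3 * price_per_mbps:.2f}/month")
```

Which is the point: the same bytes cost roughly 3x more once you bill on burstable 95th-percentile usage instead of a flat commit.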

Don't forget to significantly over-build your stuff, or you might get knocked off-line for anomalies or DoS attacks.

Admittedly, the scale of Google, AWS, and Azure makes the cost per Mbps much, much lower, but, as others have pointed out, AWS, Google, and Azure don't need to charge less than they do.

The big cloud platforms offer a rich selection of different offerings, which (just like in every other industry) cross subsidize each other.

When I go to a restaurant, I don't expect that they will be making the same profit margin on every item on the final bill, and in fact, they almost never do. Drinks tend to have a very high profit margin, some labour-intensive items may be break-even at best, and the complimentary bread sticks or chips and salsa (if offered) will certainly be a loss.

I guess I could write a very upset article about how my local mexican restaurant is SERIOUSLY SCREWING ME OVER with their drink prices, but if I don't write the companion piece about their cheap burritos (subsidized, of course, by the drink prices), it would only show half the picture.

The reality is that I'm buying a whole package (at AWS or at a restaurant) and I should evaluate the whole picture. Yes, I can get bandwidth cheaper outside AWS (or a can of Coke a lot cheaper from a big box retailer). But I can't really get the total package of integrated, managed services outside AWS (certainly not for the cost they charge), any more than I can get someone else to show up in my kitchen, cook a three-course meal, and then do all the dishes. (Which is to say, I totally could hire a chef to do that, but it would cost me a lot more. I could BUILD an internal SQS clone if I had to, but my employer would never break even on the cost of getting me to do so.)

AWS is very cheap for some things and very expensive for others. Depending on your usage and workload it may or may not be economical to buy the package they offer. If it is, go for it. If not, don't. Just like, you know, every other good or service you purchase in both your personal and professional life.

"Amazon EC2, Microsoft Azure and Google Gloud Platform are all seriously screwing their customers over when it comes to bandwidth charges."

Disagree. There's no false advertising here; they're making you pay for their service and the convenience of using a combined [PaaS, IaaS, SaaS, etc.]. It's unfair to view these services as a singular function; you typically touch MANY features/products in production. The cost includes the convenience of having everything under one roof, because, face it, doing everything by yourself at the SLAs provided by the giants is no trivial task.

Unless you're a BIG company that likes to distract itself with infrastructure instead of building and sharpening its core offerings, chances are that you will NEVER really build anything as reliable, interoperable, configurable, and manageable at comparable cost.

I have posted a few times about how absurdly expensive all the cloud providers are. If you have a baseline load you should be co-locating bare metal. Any excess capacity you need should spill over into your AWS/GCE/Azure account.

For example: a dedicated m4.16xlarge EC2 instance in AWS is $3,987/month. You could build that same server for $15,000 through Dell, lease it at $400/month (OpEx), and colo it with a 1 Gbps blended bandwidth connection billed at the 95th percentile for $150/month.
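Putting those figures side by side (illustrative only; these are the numbers quoted in the comment above, not current price quotes, and they ignore the staffing and redundancy overheads raised elsewhere in the thread):

```python
# Monthly cost comparison using the comment's own figures.
ec2_dedicated  = 3987  # $/month, dedicated m4.16xlarge (as quoted above)
colo_lease     = 400   # $/month lease on a ~$15,000 Dell server
colo_bandwidth = 150   # $/month, 1 Gbps blended, billed at 95th percentile

colo_total = colo_lease + colo_bandwidth
print(f"colo: ${colo_total}/mo vs EC2: ${ec2_dedicated}/mo "
      f"({ec2_dedicated / colo_total:.1f}x)")  # roughly 7x on these numbers
```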

This analysis is way too oversimplified. It completely ignores the shape of the traffic (real apps have peaks and valleys of usage - they don't pump exactly 100 Mbps every second of every day). Cloud providers charge the same amount regardless of how bursty your app is, and they have to provision capacity so that all customers get good performance even under unusual spikes (the more spiky your traffic, the better a deal per-MB pricing is for you). And of course it ignores all the ancillary networking HW and SW that supports these services, and all the labor you save by not having to manage that stuff yourself.

I've analyzed the cost of cloud services to death (I've worked for a couple of them) and the only way they aren't great deals is if you don't need high quality operations (i.e. if you can deal with slow-downs or occasional outages then you can do better elsewhere). Otherwise, if you're small-scale then these marginal cost differences don't matter, and if you're larger scale then call up these cloud providers and get yourself a discount off the list price.

(Bandwidth is ambiguous in this context so I'll use "data transfers" instead)

I personally don't see the outrage. AmaGoogSoft overcharges for data transfers because they know they can get away with it and that lowering it won't attract more customers.

Customers with transfer-heavy applications will always buy their servers from providers with unlimited transfers like OVH[0][1], where you can do hundreds of terabytes a month with no extra charges (1.5 Gbps sustained is 1.5 Gbps × 3600 × 24 × 30 s ÷ 8 ≈ 486 TB). Even if AmaGoogSoft lowered their transfer prices 100-fold, their pricing still couldn't compete with OVH.

Companies with enough engineering resources can always go with the best of both worlds: transfer-heavy servers on OVH, and "regular" servers on AmaGoogSoft. The expensive data transfers will only hit smaller outfits, but these customers won't switch because it's not worth the hassle to split your hosting across two providers.

How is this a surprise to anyone? The big players are all pushing their clouds because it's a cash bonanza. It's the SaaS model for hardware: make money forever, because your customers never own anything.

I've done the math many times and it's orders of magnitude cheaper to colocate as long as you can afford an IT guy and the upfront cost of hardware.

Worth noting that Digital Ocean doesn't actually bill for bandwidth. They say they do in their Droplet template descriptions, but they really don't. I've pushed many many terabytes to/from my Droplets and never received a bill for it. But you need to cap your individual Droplet bandwidth using something like tc[0] around ~400MB/s or they'll shut off the network interface (DDoS detection).
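For reference, that kind of cap could be sketched with tc's token bucket filter (a hypothetical example: the interface name, rate, burst, and latency values here are placeholders chosen for illustration, not Digital Ocean's actual threshold):

```shell
# Cap egress on eth0 to roughly 400 MB/s (3.2 Gbit/s) with a token bucket filter.
# Run as root; "eth0" is a placeholder for your actual interface.
tc qdisc add dev eth0 root tbf rate 3200mbit burst 50mb latency 50ms

# Inspect or remove the cap:
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```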

I don't think it's fair to compare GCP's egress costs to a colo's. A colocation facility simply sends your packets straight to the internet, whereas GCP routes your packets over private fiber to the POP closest to your user, giving you better latency.

I suspect the high bandwidth price is a targeting tool, primarily for repelling those wanting to host seedboxes, porn sites, and the like. You can probably get a much better deal if you're paying > $10k/mo.

Even then, cloud bandwidth is insanely expensive. For example, Hetzner charges $1.30/TB (if you happen to exceed their generous 30 TB quota). In comparison, Amazon is about 70x more expensive at $90/TB.

Cannot agree more with the title. I did a comparative model of AWS bills and colo bills for companies of different sizes (https://blog.paxautoma.com/buy-or-rent-the-cost-model-hostin...). It turned out that frequently overlooked costs for bandwidth and provisioned IOPS can be responsible for a large chunk of the EC2 bill.

When you buy in Mbps, you're actually billed on 95th-percentile usage, so this comparison is way off. Depending on traffic patterns, 1 Mbps committed can work out to about 120GB in a month on average. If you use reasonable GB-per-Mbps numbers, the cloud providers don't look all that bad.
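A minimal sketch of how 95th-percentile billing interacts with a day/night traffic pattern (the sample values are made up to roughly reproduce the ~120GB-per-committed-Mbps figure above; real billing uses 5-minute samples over the whole month):

```python
SECONDS_PER_MONTH = 30 * 24 * 3600

# A month of 5-minute Mbps samples: 1.0 Mbps during ~7 busy hours a day,
# 0.1 Mbps the rest of the time.
samples = [1.0 if (i // 12) % 24 < 7 else 0.1 for i in range(8640)]

# 95th-percentile billing: rank the samples, discard the top 5%, bill the peak left.
ranked = sorted(samples)
billed_mbps = ranked[int(len(ranked) * 0.95) - 1]

avg_mbps = sum(samples) / len(samples)
actual_gb = avg_mbps * 1e6 * SECONDS_PER_MONTH / 8 / 1e9

print(billed_mbps)       # 1.0 -- you pay for the busy-hour rate
print(round(actual_gb))  # ~117 GB actually moved, close to the ~120GB claim
```

The spikier the traffic, the bigger the gap between the billed rate and the data actually moved, which is why per-GB pricing can favor bursty apps.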

Keep in mind that with cloud providers you're also paying for the SDN that makes dynamic provisioning of VMs and logical network segmentation possible. Scalable SDN is much harder / more expensive than traditional networking.

FWIW cost for a small biz at most major metro US facilities is closer to $1/mbit for a multi-carrier (which means generally multi-route and high quality with some caveats), and $0.20 if you do something like he.net. For higher volume customers you can easily cut both of those rates in half right now. And you can also participate on public peering switches for generally just a low setup fee at the best facilities.

AWS, GCE, and Azure seem like the platforms of yesteryear in all dimensions when compared to something like packet.net. I think these providers could be between a rock and a hard place due to the unsuitability of native Linux containers for secure multi-tenancy. This does leave a nice runway for Joyent as both a provider and software vendor for at least a little bit, but I think packet.net is really going to change the economy of infra.

To add to the mix: what if you need multi-datacenter deployment/replication? Both Amazon and Google will give you greatly discounted traffic there ($0.02 / $0.01). And that's only the start: you can easily migrate from one data center to another with little or no cost attached (try that in colo).

Suffice it to say that the cloud providers have a different set of customers in mind. I have servers on OVH/Linode-type providers as well as a single app running on AWS. The products I run on OVH/Linode I sell for less than $20/month; the one on AWS sells for $200+ per month. Again, it's because of the requirements/SLAs. Based on experience, AWS is a lot more robust for what I'm using it for.

I've always figured that the point of this was to (a) let overall costs generally scale with the "size" of the customer while (b) simultaneously creating a sticker-shock effect on migrating out of whatever cloud. For example, looking at Google, the cost to transfer a terabyte out of their Cloud Storage product is six times the cost of just keeping it there for a month. Of course some of this collapses if you really look into it (e.g. you are going to pay that egress anyway if people are actually accessing the data), but I'm not sure that's always clear to execs doing back-of-envelope math. I think that to some degree this desire for lock-in is explicitly visible in the asymmetric ingress/egress pricing, but if I'm right, it's a little bit underhanded, because it would mean that slightly-deflated instance prices, for example, are subsidized by lock-in.

This subject is never productive on HN because almost every reply argues with a certain use case in mind but people never actually outline that use case. Those who read your post and reply do so with their own use case in mind and obviously what that other person suggested is madness (in your scenario that you never actually verbalize). Ultimately no one learns anything because they all think everyone is in exactly the same scenario as they are. Or maybe they think their personal choice is a "silver bullet". Probably depends on the poster.

I would imagine contention ratios would also factor into pricing. At least in Australia you might have an 8:1 ratio of actual bandwidth available for a consumer plan, and 3:1 for a business plan. I'd imagine you pretty much get all the bandwidth you pay for in a data centre; I've certainly saturated 100 Mbps connections on servers in the past, 24x7. But perhaps people more knowledgeable than me could comment on this?

In order for this comparison to be valid you'd need to get 100% utilization of your colo or Google Fiber pipe. You only pay for what you use with AWS et al. And quite obviously the pricing of GF and Amazon Lightsail assumes less than 100%. Nobody's getting "screwed".

As the author of this post I need to clarify something. I love Amazon AWS, and I love the flexibility and awesomeness of cloud computing. I just don't like the bandwidth pricing ;) Sorry for the interruption, feel free to continue crucifying me. P.S. If someone has more accurate data I'd be happy to update the post or add a guest post. Cheers, Love Arador xoxox

People fail to realize the true cost of operating on S3, specifically when hundreds of TB of usable data are in play.

"By putting the "tax" on bandwidth, a lot of these business cases are solved. I see why Amazon does that.AWS is great, but as you get into high scale (specifically in storage - 2PB+), it becomes extremely cost prohibitive."

Amazing to see the number of people who try hard to justify this blatant rip off pricing. This is coming from the same group of people who complain endlessly about the cost of wireless data and telco data caps.

This is a super irritating feature of "enterprise" vendor pricing. What almost everyone on these platforms is doing is moving most of their bandwidth out of one of the CDN services (like say cloudfront) and then negotiating custom pricing on that bandwidth that is often as much as a full decimal place cheaper as long as you sign a couple grand a month yearly commit. There's still this massive massive pricing cusp between using the cloud as a utility and jumping into the suits & drinks & lunches sales guy game.

"How to Win Friends and Influence People" by Dale Carnegie, because it changed my understanding of people for the better.

"Surely You're Joking, Mr. Feynman!" by Richard Feynman, because it gave me a model for how to enjoy life.

"Models" by Mark Manson, because it helped shape my understanding of heterosexual relationships.

"An Introduction to General Systems Thinking" by Gerald Weinberg, because it illuminates the general laws underlying all systems.

Fiction:

"Stranger in a Strange Land" by Robert A Heinlein, because it showed me a philosophy and "spirituality", for lack of a better word, that I could agree with.

"The Fountainhead" and "Atlas Shrugged" by Ayn Rand, because they showed me how human systems break, and they provided human models for how to see and live in, through, and past those broken systems.

"Harry Potter and the Methods of Rationality" by Eliezer Yudkowsky, because it set the bar (high) for all future fiction, especially when it comes to the insightful portrayal of the struggle between good and evil.

Rationality: from AI to Zombies really changed my way of thinking in many ways. It's very hard to describe it or sell it in a few sentences. Partly because it covers so many different things. And partly because I read it so long ago and have already absorbed many of the good ideas in it. They no longer seem exciting and new, and just feel obvious. But they certainly weren't when I first read it.

I constantly see places where an idea from the book is relevant and I want to make people read a chapter of it. Examples include insights into evolution, artificial intelligence, morality, and philosophy. There's a short section on how people tend to argue about the definitions of words and how unproductive this is, that I always find relevant. There's a lot of discussion on various human biases and how they affect our thinking. My favorite is hindsight bias, where people overestimate how obvious events were after they know the outcome. Or the planning fallacy, which explains why so many big projects fail or go over budget.

The author's writing style is somewhat polarizing. Some people love it and some people hate it, with fewer in between. He definitely has a lot of controversial ideas. Although in the 10 years since he started writing, a lot of his controversial opinions on AI have gone mainstream and become a lot more accepted than they were back then.

Sapiens by Yuval Noah Harari. It gave me a good understanding of where we, as a species, came from. What did we do, why did we spread across the planet, how did we replace other hominids? What I really appreciated was his ability to explain some of the underpinnings of society like religion, nation states and currency with a relatively simple idea. Afterwards I felt like "damn that's so simple, I should have thought of that!" When you think that, you know you're on to something good.

On Writing by Stephen King. This is a biography masquerading as a book of writing advice... or it's the other way around. Whichever it is, I think it's a great book for any aspiring writer to read. King explains the basics of how to get started, how to persevere, and, through his experiences, how not to handle success. Full of honesty and simple, effective advice.

Chasing the Scream by Johann Hari. Most people agree that the War on Drugs is lost and has been lost for decades now. But why did we fight it in the first place? Why do some continue to believe it's the correct approach? How has it distorted outcomes in society and how can we recognise and prevent such grotesque policies in the future? This book offers some of those answers.

Only if you're Indian - India After Gandhi by Ramachandra Guha. Sadly almost every Indian I've met isn't well informed about anything that happened in India after 1947, the year India became independent. History stops there because that's the final page of high school history textbooks. An uninformed electorate leads to uninformed policy, like "encouraging" the use of a single language throughout the country. If I were dictator, I'd require every Indian to read this book.

The Master Switch: This really puts a lot of things into context, especially if you're in the tech industry. It's basically a history of the entire information technology industry, and it's fascinating how the same things happen over and over again, pendulums swing back and forth, and people keep making the same mistakes. Also, you can see the larger picture of why some large tech companies make the decisions they make, and how to successfully compete if you are into that.

You will become a pessimist for a while after reading this, just because it feels like there's no meaning in all this, since everything repeats itself and nothing is forever. But when you recover from it, you'll find yourself much more insightful about the industry and able to make better decisions.

I love all the answers in here but please, please answer with more than just a title! I want to know why I should care about a book -- sell it to me, don't just throw it out there and ask me to do the work.

You hear 'ancient wisdom' on how to lead the good life all the time. These ancient aphorisms came from a time before the scientific method and the idea of testing your hypotheses. Tradition has acted as a sort of pre-conscious filter on the advice we get, so we can expect it to hold some value. But now, we can do better.

Haidt is a psychologist who read a large collection of the ancient texts of Western and Eastern religion and philosophy, highlighting all the 'psychological' statements. He organized a list of 'happiness hypotheses' from the ancients and then looked at the modern scientific literature to see if they hold water.

What he finds is that they were often partially right, but that we now know more. By the end of the book, you have some concrete suggestions on how to lead a happier life, and you'll know the studies that will convince you they work.

Haidt writes with that pop-science long-windedness these books always have. Within that structure, he's an entertaining writer, so I didn't mind.

The Bible, cover to cover: if you're reading Western literature or philosophy produced in any year A.D., the Bible is required reading for comprehending many of the references and various rhetorical modes. I'm irreligious from a Muslim background myself, but I'm reading it now. The same goes for the Quran; my family is not a practicing Muslim family and thus I never read it, but it's part of the canon and must be read. I'm not sure I would like to have read these earlier, though, as now I have the awareness not to be fooled by the stuff in these books.

Karen Armstrong's A Short History of Myth is a very nice guide into mythology and what that and religion are. It's like a vaccine for any sort of fundamentalism or bigotry, if read with some accompanying knowledge of mythological traditions.

Technically this book is about how humans interact with things, but it actually covers far more topics than one might think: how humans act, how they err, how they make decisions, how memory works, and what the responsibilities of the conscious and subconscious are. Also, you'll start to dislike doors, kitchen stoves, and their designers.

Man's Search for Meaning (published under a different title in 1959: From Death-Camp to Existentialism) by Viktor Frankl, who survived the concentration camps and went on to develop logotherapy and existential analysis (considered the third Viennese School of Psychotherapy). "Lack of meaning is the paramount existential stress. To him, existential neurosis is synonymous with a crisis of meaninglessness." An interesting read; it does not focus on the horrors of the event, instead recognising the human capacity to overcome and rise above.

I wish I had read Real World Divorce, much of which can be found on realworlddivorce.com. It's notable for the fact that Philip Greenspun is a major contributor to it, which I found most surprising and intriguing.

I don't want to duplicate a lot of text, so I'll link to my Amazon review of it:

TL;DR it's the only bit of literature I've found that's got the real talk, and in data-and-comparison driven ways hackers will appreciate.

Yeah, obviously I'm going through a divorce, but I really think this book should be required reading for anyone before they get married in the US. I don't say that lightly or confer that kind of veneration unto books at the drop of a hat.

The Origin of Consciousness in the Breakdown of the Bicameral Mind / Julian Jaynes. Hard to tell if crazy or genius, but well worth a read. Read it at 38; wish I had read it at 20 or so. Most of us take our inner voice for granted, but should we really? And what if there were evidence supporting the idea that there's another inner voice, but our modern upbringing suppresses it (though it does reappear with some illnesses, under duress, etc.)?

Fiction:

Different Seasons / Stephen King. A collection of four stories, NOT your usual King horror genre; one became the movie "Stand by Me", another became "The Shawshank Redemption", the third became "Apt Pupil", and the fourth will likely never become a movie. All are excellent. I actually read it at 16, which was the right time, but I'll list it here anyway; if you've seen the movies and liked them, it's worth reading - the stories are (a) much more detailed than the movies, in a good way, and (b) related in small ways that make them into a bigger whole than the individual stories.

Management (software/hardware oriented):

Peopleware / Demarco & Lister - read after I was already managing dozens of people. Wish I had read it long before. This book is basically a list of observations (with some supporting evidence and conclusion) about what works and what doesn't when running a software team. Well written, and insightful.

The Mythical Man-Month / Fred Brooks - wish I had read this before first working in a team larger than two people. Written ages ago, just as true today; a tour de force of the idea that the "man-month" is a unit of cost, not a unit of productivity.

IMO you won't really understand the nature and limitations of fiction until you've read JLB. His work won't change your life, as such, but it will divide it into two parts: the part that took place before you read him, and the part that comes after. You'll always be conscious of that division.

The Four Steps To The Epiphany by Steve Blank. I've learned more about "what goes into building a startup" from reading this book than any other book I've read.

The Fountainhead by Ayn Rand. One of the most inspirational stories I've ever read. A strong reminder to remain true to yourself in the face of all sorts of challenges and adversity.

Mastering The Complex Sale by Jeff Thull. I don't claim to be a great, or even good, salesman. But if I ever become any good at selling, I expect I'll credit this book for a lot of that. I really like Thull's approach, with its "always be leaving" mantra and focus on diagnosis as opposed to "get the sale at any cost".

The Challenger Sale by Brent Adamson and Matthew Dixon. Like Thull, these guys deviate from a lot of the standard sales wisdom of the past few decades and promote a different approach. And like Thull, a core element is realizing that your customers aren't necessarily fully equipped to diagnose their own problems and/or aren't necessarily aware of the range of possible solutions. These guys challenge you to, well, challenge your customers' pre-existing mindsets in the name of helping them create more value.

The Discipline of Market Leaders by Fred Wiersema and Michael Treacy. A good explanation of how there are other vectors for competition besides just price, or product attributes. Understanding the ideas in this book will (probably) lead you to understand why there may be room for your company even in what appears to be an already crowded market - you just have to choose a different market segment and compete on a different vector.

How to Measure Anything by Douglas Hubbard. It's pretty much what the title says. This is powerful stuff. Explains how to measure "things" that - at first blush - seem impossible (or really hard) to measure. Take something seemingly abstract like "morale". Hubbard shows how to use nth order effects, calibrated probability estimates, and monte carlo simulations, to construct rigorous models around the impact of tweaking such "immeasurable" metrics. The money quote "If it matters, it affects something. If it affects something, the something can be measured" (slightly paraphrased from memory).

I wish I'd read each of these much earlier. Each has influenced me, but I'd love to have been working off some of these ideas even longer.

"Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers" by Geoffrey A. Moore and also his recent "Zone to win". His books explain some of the "deeper structure" to tech business, and is one of the few business-related books I've read that has any depth. By "depth", I mean in the sense that I'm used to from research mathematics (I'm a number theorist by training), where you learn something about a problem that lets you think about problems in a more detailed way.

The Art of Learning by Joshua Waitzkin. I was definitely in the right place to take in the topic. It is, more or less, a book about how you can be "good" without much effort, but being great, or the best, takes a lot of hard work and time. This book helped me learn that lesson.

On top of that, some of Tim Ferriss' stuff on accelerated learning. Learn how to learn first, then learn everything else.

i discovered 'the phantom tollbooth' in grad school (for some reason, it was pretty much unknown in india when i was growing up). i'm pretty sure kid me would have loved it even more than adult me did.

_The Beginning of Infinity_ changed my worldview from thinking that progress is slowing down, or that the world's problems are overpowering, to a more hopeful one in which problems will always be there for humans to solve, and through human activity we can keep making progress. It also gave me hope that one day we might clearly see that good, bad, evil, love, and beauty are fundamental aspects of the universe, just as gravity, atoms, and radioactivity are. It also walks through the philosophy of science (vs. pseudo-science). All in all, I wish I had read it earlier.

_Feeling Good_, because of the tools it contains to battle the self-defeating feelings that lead to bouts of sadness or depression. I wish everyone would read that book so they could build mental immunity against circular, depressing thoughts.

"The Self-aware Universe" by Amit Goswami. Opened my eyes to a new way of looking at the world around us, and finding new ways to react to events that affect us. Wish I'd read this when I was much younger - before I had decided with a high level of confidence that I am completely in control of everything I do, all that happens to me and how I react to events. Seeing yourself as a minuscule part of a whole you perhaps will never fathom, allows you to simply focus on doing your best when you can and not get overly possessed with results. One of the many mystic-physics books that were very much in fashion for a while, but the one that stuck to my consciousness the most.

_The Art of Electronics_. As a software guy who sometimes is involved in embedded systems, having a good understanding of what's going on at the resistor/capacitor/transistor level would have helped a lot. I did a bunch of hobby electronics as a teenager, but never had circuit theory. I knew a lot about digital design, but not the analog stuff that the whole world ultimately rests on.

So now, when I hear a switching power supply whine in protest, I will think of it as the squeals of pain of the engineers whose lives I turned into a living hell through my lack of appreciation for P = IV. I'm truly sorry. I wasn't thinking. (And this is just the first chapter of that book.)

I did read it fairly early, and it had quite an impact on my life and thinking. It put into words a lot of my discomfort with a life focused on materialistic success. And it was inspiring to see an intellectual combine so many of the thoughts and topics he developed during his lifetime into one coherent and approachable book.

I found it by working my way through the list of joint nebula and hugo award winners (which is a really fun project, because all of them are amazing books). It is my favorite sci-fi book. It changes the way you look at gender, especially if you haven't questioned the concept much before.

I read it at 18 and I wish I had read it way earlier. It taught me to be mad, to live life, to get out and see the world. But looking back at it, it also taught me how to be responsible and how to not to be a jerk.

The 4-Hour Workweek: it gave me some perspective on the 9-5 job; I wish I had given it more thought earlier in my life, when I had more time.

The 80/20 Principle: while mentioned in The 4-Hour Workweek, it has a lot more to offer in book form about how you should go about leveraging your time. There was a real gem in there about how books are really the best way to acquire knowledge, and a great way to approach reading at university.

There was a speed reading and studying book I came across through a friend who owns a book store that really helped me. I wish I'd had that book before I entered high school. I can never recall the name, but I will try to find it.

The War Against Women (Marilyn French) - the underlying premise is wrong, but reading it is a good way to learn how to deal with semi-rational but insane theses. And yes, I can defend this position with quotes/paraphrases from the book, with rational explanations as to why it's insane.

How the Police Generate False Confessions (James Trainum) - a former cop explains why harsh interrogation techniques are counter-productive, and how to defend yourself.

Get the Truth (Philip Houston et al.) - how to tell when people are lying, via simple techniques you can remember.

"The Silk Roads: A New History of the World" By Peter Frankopan. This book tackles essentially all of human history, tying together the world's major cultural shifts with the socioeconomic forces that brought them to pass. For readers who have implicitly come to believe that the center of the world has always been Western Europe (I had), this book will greatly shift your perspective (Eastward). I've never learned so much from a book, and damn is it entertainingly written.

"Getting Things Done" by David Allen. I'm sure everyone here is familiar with bits and pieces of GTD methodology, but I encourage you to check out the full text. There are a lot of great ideas in there there that I didn't find reading online about GTD. I have been a serious GTD user for more than a year now, and I feel amazingly more in control of my life. Everything I've done in that time - from planning my wedding, to projects at work, to completely organizing my house - has gone smoother than I can remember projects going ever before.

How to Become CEO: The Rules for Rising to the Top of Any Organization by Jeffrey J Fox

I found this book in a library's junk pile, evidently unread. It has one of those bad 80s covers that suggests it'll be terrible, but to my great surprise, it's great! It's 80 or so one-page missives/dictums/edicts that take barely half an hour to read through - I re-read it every time I have a job interview coming up or some kind of major life choice. The author's tone is abrasively direct; this is how it is, not how it should be. And the advice isn't just for wannabe CEOs; it's accessible and attainable for everyone.

Eric Hoffer, The True Believer. You will see applications for the principles in this book in all aspects of society and politics. Easy to read and unassailable insight into what makes people join a common cause.

I've always found this question pretty impossible to answer. There are so many books I find myself wanting to recommend, and the list soon becomes unmanageable. So instead I'm going to offer a different resource: Patrick Collison's entire library. He color-codes the books he thinks are great, and lists hundreds of them. https://patrickcollison.com/bookshelf

"Out of the Crisis" by W. Edwards Deming. The author was one of a handful of people who helped the Japanese apply methods of statistical control to their manufacturing processes, which in turn helped them to become an economic superpower after their country's occupation by the Allies. In the book the author takes a deep look at the problems of management in the United States, and provides a list of reforms that would lead businesses "out of the crisis". I only recently learned of W. Edwards Deming, and I wish that I had known about him much earlier.

How We Got to Now, Steven Johnson. Walks you through a half dozen foundational inventions and the process through which they came to be. Fascinating to see what the inventors were trying to solve vs. how the world ended up applying their technology.

Unbroken, Laura Hillenbrand. If you haven't read the book don't judge it by the (awful) movie.

The Liberators: My Life in the Soviet Army. Really opens your eyes to the problems and realities of communism. I love the author's dry sense of humor as he witnesses the absurdity of many of the things he encountered.

Sniper on the Eastern Front, Albrecht Wacker. A view of WWII through the eyes of a German sniper.

Auschwitz: A Doctor's Eyewitness Account, Miklos Nyiszli. A view of the holocaust through the eyes of a Jewish doctor in the Auschwitz concentration camp.

Animal Farm by George Orwell: a revelation of the beginning and end of revolution and 'change'.

Jewish Wisdom for Business Success.

The Call of the Wild by Jack London: it shows how possible it is to adapt in order to benefit maximally from change - using a dog's (Buck's) life.

In high school I was assigned this book, but I didn't read it all; it seemed like a waste of time to read 1000+ pages about a silly knight.

A few years ago I got into reading a lot of fiction translated from Spanish and Don Quixote got back on my radar so I decided to give it another try. I was blown away. It's astounding that a book from 500 years ago is still so funny and engaging today. Grossman's translation makes the book accessible and very enjoyable. If you didn't know the history you'd believe it had been published in the last few decades.

I recommend this because it's the best example of how literature can be time travel. When I smile at one of the adventures in the book I know that I'm sharing an experience with readers across centuries. There's almost no other way to get that feeling.

Morris uses his background as a zoologist to examine human beings as a regular animal; many books have come out of this approach. In this one he draws parallels between the city-dwelling human and the caged animal. This sort of perspective gives you self-awareness about your own tribalism and how we as a species deal with the opposing forces of individuality and longing to belong to a group. Also some ideas on the urban-rural divide that has consequences that leave people on either side puzzled (Brexit, Trump etc.)

The Hard Thing About Hard Things (Ben Horowitz): This book is mostly recommended for managers, but I found it very useful for recalibrating my expectations about life. You will also learn about Silicon Valley history and its dynamics.

The Fifth Discipline (Peter Senge): This book is one of the systems-thinking references, and it helped me learn more about hidden dynamics in the world around me. I truly wish I had read this when I was a junior in college.

"Peak: Secrets from the New Science of Expertise" because I've been learning ineffectively my whole life not knowing that I was. Should be required reading for every 15 year old. The best, most science based book I've ever read about learning effectively.

It opened my mind to understand metaphors and analogies in literature. It allowed me to peek under the surface of text. Seriously, every written piece I read after that was different for me than before.

It also gave me more insight in the human mind and psyche.

Being able to read and understand more literature also gave me more perspectives and deeper understanding of the world and place of mankind in it.

Some other nice reads:

"The Way of Zen" - Alan Watts

"The Book" (On the Taboo Against Knowing Who You Are) - Alan Watts

"Demian" - Hermann Hesse; but I wouldn't want to read it earlier. I think I read at the exact best time for me (in my late 20s).

Should be called How Minds Evolve as Hierarchies of Darwinian Turing Machines, analogously to deep neural nets (Dennett cites Geoff Hinton and Edinburgh's Andy Clark).

"working computer models have been developed that can do a good job identifying handwrittenscribbled, reallydigits, involving a cascade of layers in which the higher layers make Bayesian predictions about what the next layer down in the system will see next; when the predictions prove false, they then generate error signals in response that lead to Bayesian revisions, which are then fed back down toward the input again and again, until the system settles on an identification (Hinton 2007). Practice makes perfect, and over time these systems get better and better at the job, the same way we doonly better" p.178 [1]

"Hierarchical, Bayesian predictive coding is a method for generating affordances galore: we expect solid objects to have backs that will come into view as we walk around them; we expect doors to open, stairs to afford climbing, and cups to hold liquid. These and all manner of other anticipations fall out of a network that doesnt sit passively waiting to be informed but constantly makes probabilistic guesses about what it is about to receive in the way of input from the level below it, based on what it has just received, and then treating feedback about the errors in its guesses as the chief source of new information, as a way to adjust its prior expectations for the next round of guessing."

Which echoes Richard Gregory's concept of vision (or perception) as a hypothesis continually tested against input.
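The loop those passages describe - guess, compare the guess against the input, treat the mismatch as the news, revise - can be sketched in a few lines. This is a didactic toy of my own making, not Hinton's actual model:

```python
# Toy "predictive coding" loop: the system never uses the raw input
# directly as a teaching signal; it only sees how wrong its last
# prediction was, and uses that error to revise the next prediction.

def settle(signal, lr=0.5, steps=30):
    """Iteratively refine a prediction of `signal` from error feedback."""
    prediction = 0.0
    for _ in range(steps):
        error = signal - prediction   # feedback: how wrong was the guess?
        prediction += lr * error      # revise the prior for the next round
    return prediction

# The prediction converges on the input as the errors shrink.
print(round(settle(7.0), 4))  # → 7.0
```

Stack several such layers, each one predicting the activity of the layer below, and you have the rough shape of the hierarchical scheme Dennett is summarizing.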

This is paradigm-shifting, weltanschauung-shattering stuff. Dennett very clearly lays out a methodology for how all aspects of minds can evolve using hierarchical compositions of wetware robots, or:

"Si, abbiamo un anima. Ma fatta di tanti piccoli robot!(Yes, we have a soul, but its made of lots of tiny robots!)" p.24 [1]

1) Superintelligence. This is a really great read about the implications of AI, or general intelligence. It's really intriguing and brings up so many scenarios I've never thought about. Anyone interested in AI should definitely read this.

Similarly, On Intelligence is an absolutely brilliant book on what 'intelligence' is, how it works, and how to define it.

2) Hooked. Although it's very formulaic, Hooked provides a lot of good ideas and approaches on building a product.

3) REWORK. If you're a fan of 37 Signals and/or DHH, this is a succinct and enjoyable read about their principles on building and running a business.

Currently I'm reading SmartCuts and The Everything Store - both of which are great so far.

Anna Karenina, A Suitable Boy, and the like. Excellent books, but after college it's been difficult to start them and keep at them for an acceptable period, given that time (or the lack of it) is an issue now. I also wanted to read Ulysses. I have been stuck at around the ~20% mark of Dostoyevsky's The Idiot for a long time. Of late I've had better success with shorter ones.

For me the reason is simple - it's just the daunting number of pages and it is a shame that I have not read/finished these books.

I'm 30 now. I wish I had read this when I was 20. It would've made dating in my 20s so much easier. I came across it last year and it's probably the single most important book I'll ever read in my entire life, for the sole reason that understanding women will allow me to have a successful marriage one day. I cannot recommend this enough.

There are some books I keep coming back to when I am "feeling lost and/or hopeless", when my "back is up against the wall and/or feel cornered", when I feel like I have "hit rock bottom" or I just need to "escape reality"... This list contains books I have read/listened to more than a couple times:

!For inspiration:! 1. Losing My Virginity (Richard Branson) - Richard Branson's autobiography. From a student magazine to Virgin to crazy ballooning adventures and space! I keep coming back to this when I feel like I need a morale boost. There isn't an Audible version of this book, but there is a summary-type version on Audible, "Screw It, Let's Do It", which does a good job curating the exciting parts.

2. The Everything Store (Brad Stone)

-AMAZON and the man leading the massive team behind it. Jeff Bezos is quite easily one of the most important and influential people in the world. His relentless pursuit of building Amazon (& its various products) amid constant setbacks, losses and naysayers... I personally use Amazon and their products every day. It's a really interesting view of how things are run backstage.

3. Steve Jobs (Walter Isaacson)

- One of the most popular books in the Valley. Almost all startup founders I have met have read this. They usually have a very polarized view of Jobs after reading it. Take the good stuff and leave out the bad/crazy. Jobs was a very polarizing person and so is his biography... This is a very long book. "The Second Coming of Steve Jobs" by Alan Deutschman is another really good book, a much shorter read, and not super-polarizing (it leaves out some of the crazy stuff from his early life). Other notable Steve Jobs books I have read & highly recommend: Becoming Steve Jobs & The Steve Jobs Way.

4. Elon Musk (Ashlee Vance)

-Another polarizing book. I am a Spacex & Tesla Fan-boy. I picked this up in 2015 the day it was launched! I have read this at least half a dozen times by now. Hard-work, perseverance and creativity to the max. A must read for every entrepreneur.

5. iWoz (Steve Wozniak)

-If you are a technical founder, this is a must-read! Gives a very interesting behind-the-scenes view of Apple during its inception and early years. I was really moved by how humble Woz was/is, and I am inspired by his problem-solving approach.

6. How Google Works (Eric Schmidt, Alan Eagle & Jonathan Rosenberg)

- A very good book to read after/before this: "In the Plex" by Steven Levy. Hands down the two most important / influential books while you are starting something new. I read these while I was contemplating conceiving my startup and giving up the "safety" (illusion of safety) of a "normal job". A must-read for anyone planning to start a company who wants to take it to the stratosphere (or higher)!

7. Dreams from My Father (Barack Obama)

- Another polarizing personality. A short but powerful memoir by Obama. This gives a unique insight into Obama's thought processes. Most people can relate to this and every "Leader" must read this. It really helps clear some of the fog on- what makes an effective leader.

!Business & Management:!

1. The Upstarts (Brad Stone)

-An amazing story about AirBnB and Uber. Culture is key and culture is defined by the Founders and the first few hires. The two companies are extremely similar in many ways (timing, shared economy, disruptive) but radically different in the way they are run. This came out earlier this year and is probably one of the best "startup-books" of 2017!

2. Zero to One (Peter Thiel)

-A very short book, a must-read for every entrepreneur. Dives into "first principles" thinking & execution. A very good read after/before "Elon Musk", the biography by Ashlee Vance.

3. The power of Habit (Charles Duhigg)

-I have always wondered how successful people get so much done. They have the same amount of time as everyone else, but they are able to get so much more done... how? This book answered that question. Ever since, I have been using "habits" as my ultimate personal tool. It makes a day-and-night difference when you figure out how habits are formed, how they are broken, and how you can influence the process. A good companion book (from the same author): "Smarter Faster Better".

4. How to Win Friends & Influence People (Dale Carnegie)

- I bought this book freshman year in college. I tried reading it then and gave up / got bored after the first few pages. I really wish I had actually made an effort to read the whole thing. It sat on my shelf collecting dust. Luckily I picked up the book again and gave it another shot. I read this during a particularly "rough-patch" at our startup- really helped me cope with the "situation". What was once a boring book is now scribbled with notes, bookmarks and highlights. A very useful life-guide.

5. How to win at the Sport of Business (Mark Cuban)

- A very entertaining yet eye-opening book. It is very short, finished it in a couple hours. A must read for every entrepreneur. I keep coming back to this when I feel like things are going dreadfully slow and I need a boost. If you follow Mark Cuban's blog, skip this. It is mostly a summary of his blog posts.

6. Finding the next Steve Jobs (Nolan Bushnell)

- Finding good talent and retaining it is probably the single most important thing you will do as a startup founder (especially if you are the CEO). Many things in this book seem obvious (if you are familiar with Silicon Valley culture). A good read before you set out to hire your dream team of "rockstars". A good companion book: "Outliers" by Malcolm Gladwell.

7. The hard thing about hard things (Ben Horowitz)

-Are you in a startup? If the answer is YES, then read this NOW. Ties well with "Finding the next Steve Jobs". I wish I had read this before I started my company. I have lost track of how many times I have listened to this audio-book.

8. Start with Why (Simon Sinek)

- In mid-to-late 2013 I came across Simon Sinek's TED Talk on the golden circle, and my mind was blown. I bought the book the very next day, and I keep coming back to my notes whenever we are starting a new project. Get the "Why?" right and the product will define itself. This is as true for building companies as it is for building great products. A must-read for every entrepreneur.

9. Art of the Start (Guy Kawasaki)

-Getting ready to pitch? read this! Also watch Guy's many presentations/talks on YouTube. A good companion book- "Pitch Anything" By Oren Klaff

-I have heard that not everything in this book is "completely-true" (more distorted than others...) but still a great read!

3. The Martian (Andy Weir)

- Hands down the best science fiction book I have read. I have lost count how many times I have listened to the audio-book (probably >15). I want to go to MARS!

4. Harry Potter Series.

-My go-to "background noise". I read the books as a kid. I use the audio-books to tune out the world when working on stuff that does not require my full attention (listening to Goblet of Fire as I type this)...

5. Jurassic Park || The Lost world (Michael Crichton)

- Read the books as a kid. I usually listen to it while I am traveling. Still gets me as excited as it did when I first read the book. (The movies are nothing compared to the book...)

6. Ender's Game (Orson Scott Card)

- I am looking forward to reading the entire series. Read it once, listened to it many times (lost count). I love Space!

7. Ready Player One (Ernest Cline)

-I picked this book up while I was working on a VR project back in 2014. An excellent book for re-reads and a nice place to get some inspiration.

"How to get what you want", by Raymond Hull. Everything else follows, like a bootstrapping process. Wish I had found it 10 years earlier. Changed my life forever. I could recommend dozens other books, my walls are lined with shelves of books, but you and me are different and all you'd need is this one book to find everything else you'd need to read or do.

- So Good They Can't Ignore You
- Deep Work
- Hackers by Steven Levy (perhaps my favorite book)
- Learning How To Learn
- The Person and the Situation
- The Art of Money Getting
- Make It Stick
- The Algorithm Design Manual
- Moonwalking With Einstein
- Extreme Ownership

The Last Hours of Ancient Sunlight: The Fate of the World and What We Can Do Before It's Too Late

This book is a detailed study of what's wrong with the world and what can still be done. Chapter II brings input from various cultures on approaches that could improve things from the ground up. A must-read for us and future generations.

It discusses the intrinsic characteristics of work that lead to satisfaction, growth, mastery, and ultimately happiness. The author is a PhD who worked at a think tank and quit the white-collar life to go work on motorcycles. He discusses how white-collar work has been hollowed out, transforming "professionals" into "clerks", and why so many of us "knowledge workers" feel unsatisfied with our work. The book has helped me figure out how to change my work to be more intrinsically rewarding, and as an IT developer whose technology affects other people's work, it also helps me think more about how to make the end user's life better.

Another great book along these lines is Joanne Ciulla's (2000) "The Working Life", which is a bit more academic and has fewer motorcycles but is nevertheless very readable.

How to See Color and Paint It -- It taught me how to see color and paint it. Also how to use a palette knife which makes my paintings very different and fun.

Remembrance of Things Past -- I'm still reading this, as it's a massive stream of consciousness book, but I wish I'd started it when I was younger so that I'd be done with it by now. It's just so weird to read it and experience the writing that I enjoy it for simply being different. As you read it just remember that every ; is really a . and every . is really \n\n.

Van Gogh: The Life -- I absolutely hate the authors. They're great at research, but I feel they had a vendetta against Van Gogh of some kind. Throughout the book, at times when Van Gogh should be praised for an invention, they make him seem like a clueless dork. Ironically, their attempt to portray him as a dork who deserves his treatment ends up demonstrating more concretely how terrible his life was because he was different. I think if this book were around when I was younger I might have become an artist instead of a programmer.

A Confederacy of Dunces -- Absolutely brilliant book, and probably one of the greatest examples of comedic writing there is. It's also nearly impossible to explain to people except to say it's the greatest example of "and then hilarity ensues".

Mickey Baker's Complete Course in Jazz Guitar -- After a terrible guitar teacher damaged my left thumb I thought I'd never play guitar again. I found this book and was able to use it to learn to retrain how my left hand works and finally get back to playing. Mickey Baker's album also brought me to the Bass VI, which got me thinking I could build one, and then I did and now I've built 6 guitars. I play really weird because of this book and I love it. This book also inspired how I wrote my own books teaching programming and without it I'd still be a cube drone writing Python code for assholes. If I'd found this book when I was younger it most likely would have changed my life then too.

Reflections on A Pond -- It's just a book of this guy painting the same scene 365 times, one for each "day of the year" even though it took him many years to do it. All tiny little 6x8 impressions of the same scene. I learned so much about how little paint you need to do so much, and it's also impressive he was able to do it. I can't really think about anything I've done repetitively for every day of a year. I've attempted the same idea with self-portraits but the best I could do was about 3 month's worth before I went insane and started hating my own face.

Alla Prima: Everything I Know About Painting -- Instructionally this book isn't as good as How To See Color, but as a reference guide it is about the most thorough book on painting there is. It's so huge it's almost impossible to absorb all of it in one reading, so I've read it maybe 5 times over the years.

You might add in some basic (useful) encryption like RSA. There's a running joke in the CS department at my university that every other class has to teach RSA, so it is not incredibly difficult to do.
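For reference, textbook RSA really is only a few lines. This sketch uses the small classic primes p=61, q=53 purely for illustration; real RSA needs large random primes and proper padding:

```python
# Toy RSA: key setup, encryption, and decryption with tiny primes.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

m = 65                     # "message", must be an integer < n
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt: m = c^d mod n
print(c)                   # → 2790
```

The whole exercise for students is then replacing the hardcoded primes with generated ones and handling message encoding.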

Thanks for this - I'm just beginning on the long journey of learning programming, and have been looking for challenges to set myself now that I have (some) of the basics down (but large, practical projects are still somewhat daunting).

I'd like to make a wikipedia-of-code where each page corresponds to a "project" like the ones listed here. The project entry would look like a Jupyter notebook, and you can edit/execute the code freely.

I've actually been working on this while trying to help a friend learn programming.

I've been looking for some meatier projects that still have good feedback loops, without requiring too much domain knowledge.

For example, I've been working through a project with the friend that involves scraping prices off of a website and trying to build a thing to automatically order things off of the websites.

Multiple distinct parts, each with their own, very visible, success state. But at the same time, not too many challenging domain specific issues (main issue was just explaining CSS selectors and form POSTs).
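A stdlib-only sketch of the price-extraction step, using a made-up HTML snippet in place of a fetched page (in the real exercise you would download the page first, and a library like BeautifulSoup makes the CSS-selector part nicer):

```python
# Pull the text of every element carrying class "price" out of some HTML.
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.prices = []
        self._in_price = False

    def handle_starttag(self, tag, attrs):
        # Equivalent to matching the CSS selector ".price".
        self._in_price = "price" in dict(attrs).get("class", "").split()

    def handle_endtag(self, tag):
        self._in_price = False

    def handle_data(self, data):
        if self._in_price and data.strip():
            self.prices.append(data.strip())

page = ('<ul><li class="price">$19.99</li><li class="name">Widget</li>'
        '<li class="price">$4.50</li></ul>')
scraper = PriceScraper()
scraper.feed(page)
print(scraper.prices)  # → ['$19.99', '$4.50']
```

Each stage (fetch, extract, act) has its own visible success state, which is exactly what makes this kind of project good for a beginner.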

This looks like a great resource. I buy a lot of used tools on Craigslist, and of course, nobody ever keeps the manuals. So it's always the same time-consuming task:

1. Go to the manufacturer's web site, if they still exist, and see if they have a manual there

2. Search Google for "MODEL# pdf". Wade through pages of pond scum search engine spam and paid sites for a half hour. Apparently, enough people search for manuals to make this profitable.

3. Do some web research to find similar product model numbers (maybe 8029A manual would cover 8029B too?) and repeat 1-2 above.

4. Start searching through forums and other hard-to-index parts of the web.

5. Check torrent sites? (now I'm getting desperate!)

It's crazy how tough it can be to find a user manual. In many cases, I end up finding one scanned by another end-user and posted online to be helpful. It's also a shame that 1/2 the comments here are about copyright. I can't see how taking a site like this down would in any way benefit a manufacturer whose manual is available. Unless the manufacturer is trying to make money selling their user manual, in which case to hell with that shitty company.

I've started saving electronic copies of manuals, assembly instructions, etc for anything that I purchase. If there's no electronic copy available, I'll scan the paper manual. In all cases, I'm putting the documents into an instance of Mayan EDMS[0]. Mayan also automatically does OCR on everything that comes in, so even if the PDFs are non-OCR'd scans they're still searchable.

This is part of a larger project to significantly reduce the amount of paper that I'm keeping, which is why I'm using a document management system as opposed to a Dropbox folder. My goal is to divide the mounds of paper into things I need to keep for a long time (e.g., tax documents), and things that I can shred after a year (e.g., bills, receipts, etc). In all cases, I want the documents searchable and backed up.

The manuals have a big watermark right through the center of each page, which isn't even translucent; it completely obscures the content behind it. See https://www.manualslib.com/manual/464698/Honda-Civic.html?pa... for a random example, where the watermark completely obscures the model number of the Honda Civic's automatic transmission.

Is there some way to pay to remove the watermark? Is that how this works, these manuals are effectively just free previews?

Copyright worries notwithstanding, it is a great resource. The first thing I do unpacking anything new is searching the internet for a manual (usually PDF) and saving it to my Dropbox. I keep paper manuals around for a while but recycle them after the end of warranty period to reduce clutter.

I wish every manual was mandated to come with QR-code or at least short URL to its own electronic version.

Incredible. Just yesterday I was looking for a manual for my '80s boat motor of unknown model. A quick lookup and visual approximation allowed me to match the model on the manufacturer's site and download the series user's manual from here.

The PDF has been OCR-scanned and allows searching. This is way easier than ordering the manual from a reseller. Copy it to the cloud, and now I always have an online copy of the manual in my pocket.

I understand it is copyright infringement, but still super-useful. And I might still order a physical copy if the digital copy proves helpful.

I just used this for a new washing machine. The Electrolux support site requires one to provide the exact model number to search manuals - no browse, no index. Rather than go downstairs to read the number off the machine, I searched the web and found ManualsLib.

So, to the point, besides being annoyed by crappy manufacturer websites: should I be worried about exploits buried in PDFs? Isn't it possible to hide rootkit attacks in a PDF?

The article doesn't discuss the seniority of the fired employees. It's very hard to distinguish between scapegoating and actually working to fix the problem without that information.

The problem with Uber isn't that some employees engage in sexual harassment. The problem is that there's a culture where sexual harassment isn't taken seriously, is tolerated if the perpetrators are "high performing" in other respects, and there's common knowledge that being "high performing" covers all sorts of other ills. No amount of firing of the low-level scum that grows in this environment will fix this issue.

I wonder if events like this turn Uber employees who leave around this time into a market of lemons. Obviously nobody will put on their resume that they were fired for sexual harassment so hiring managers will have to wonder if a person who left Uber recently was fed up or fired.

Seems risky to hire a recent Uber employee at this point because bringing a toxic sexist into the company can inflict massive damage.

Even if someone genuinely wants to switch jobs out of Uber within the next few months, it would raise eyebrows at the next interview table. So Uber is going to have a very small attrition rate now. Every coin has two sides. :)

> Theresa May says there should be no "means of communication" which "we cannot read"

The article focuses largely on the technical difficulties and implementation risks that make this goal impractical. I would like to point out that the goal in question is explicitly Orwell-style surveillance.

I'm also deeply concerned about "backdoors for the good guys". Beyond just worrying about who else could get access, the "good guys" really just means government, and my comfort level with the current Trump administration using their "good guy" backdoor for a noble purpose currently sits at zero.

All that being said, how do we the tech community solve Theresa May's problem? Her philosophy is "if we knew more, we could have prevented this." Is that the right philosophy? Is there some other mechanism to authorize "legitimate" access to encrypted data?

Why even permit people to have secrets of any kind? The real "problem" is not encryption, but people keeping secrets. Encryption is just one way of keeping a secret. With a law banning private secrets, they could throw anyone in jail for not answering a question.

If the government has a back door to read all your messages, they are saying they don't want you to have any secrets at all -- but electronic messages are the only ones they know how to pry open.

It helps to give non-computer people a non-computer analogy: This is equivalent to requiring the walls of the houses we live in and the clothing we wear to be made out of special material which is opaque to us but see-through to the police. This will keep us all safe! Anybody got a problem with that?

You can hide meaning in plain English just by making it hard to read your intentions or the context that the sentence is given in. Take double entendres for example. Should we ban any grammatical construction that could possibly hide some second meaning?

This propaganda about good-guys backdoors being impossible again. This is cryptographic bullshit of the highest degree. We've had the DUAL_EC scandal, for one, as an example of an NSA backdoor which, as far as was proved, only the NSA could crack. And given that the NSA has had so many bad leaks, yet still no one except the NSA can crack it, that proved that backdoors are not only possible, but were going on behind our backs.

Don't get me wrong - I'm completely against backdoors. But when you shift the argument into "we won't do it because it's impossible", you're already agreeing that it should be done, while your argument won't hold because it is in fact possible.

There are three kinds of people:

1. Non-technical people (Theresa May) who want backdoors and don't care whether they're possible or not.

2. Technical users, with a vague general knowledge of cryptography and the imprinted rule of thumb that "backdoors are bad".

3. People with actual knowledge of cryptography who have been doing research for years into why it is possible. Just a teaser: https://en.wikipedia.org/wiki/Kleptography

Of course, the real issue would be the scale and the distribution of access to the backdoor to various agencies.

Yes, let's increase the size of the haystack. I've yet to see proof that having the ability to read each and every communication would have prevented any attack from the last couple of years. For the most part the criminals communicated in clear-text SMS, on open phone lines, or in game chatrooms. If they needed advanced crypto, that would at least prove that we have done everything else to make their lives harder, but so far it looks as if there is plenty of low-hanging fruit.

Suggestion: reduce the size of the haystack further so that limited manpower can be concentrated on those cases where it is actually useful rather than to chase each and every 16 year old with a twitter account or a facebook page.

It really does seem that these people have lists of power grabbing desires ready and waiting for the next attack. I imagine this woman's first private reaction to the news of this attack was not sadness or concern for the victims, but joy/excitement about the possibility of exploiting the situation to achieve her political goals.

The real issue with putting in backdoors, the way I see it, is that once you give one government access, you've already set a precedent and you can now be compelled by other governments to do the same.

Giving other governments backdoors would actually hurt the original country more than the backdoor could ever help it.

Segmenting the software according to which government is given backdoor will freeze the whole industry, and you would still have the unsolvable problem of imported protocols with different countries backdoors.

If the problem was only "good guys"-"bad guys" it would be solvable, but there are no good guys. There are so many countries, and each of them trust only themselves.

Perhaps we should no longer assume that politicians 'do not understand the internet' and instead assume they are asking for changes in the full understanding that these won't achieve the goal for which they're introduced.

As long as the situation that's being created is more favourable for them than the current one it's a net benefit.

Short-term politics is the biggest threat to UK society at the moment and the current government is particularly good at it.

This is a good primer on the technical side of this madness. It is quite accessible and therefore a good one to share with people who don't yet realize that what May is trying to do will mean the end of the internet and tech industry in the UK.

It does not help that the tech companies preach a kind of false equivalence here. Part of their argument is that they are investing a lot in ensuring they can "take down" these posts quickly.

So, the legal system in most countries is that if you post something explicitly (not in jest or metaphor, but quite deliberately and with intent toward other people's deaths) asking people to murder other people, that is something that you perhaps ought to be charged with an offence over.

Meanwhile Facebook, etc, essentially argue that ok, they posted a request for the murder of a lot of people, but hey, we took it down after only a few thousand people read it, and we've closed the account (until they open another one), so that's job done, no need to prosecute any further, you shouldn't ask us to cooperate with police, we've got adverts to sell here.

Small wonder that governments are changing the law, when tech companies regard requests to kill people as something that, if really pushed, they'll treat as equivalent to how they handle copyright infringement, but actually there's less money in it for them so would you mind if it was a bit further down the planned feature list.

The list which the author introduces with the words "This, then, is what Theresa May is proposing:" sounds mad to anyone who knows anything about cryptography but they (Theresa May doesn't differ much from her western colleagues here!) really mean it! So even though they will not be able to fully succeed in reaching their goal, they are going to make our digital lives very, very miserable if we don't find a way to short-circuit such insane plans asap!

This is obviously nonsense - but I'm not envious of someone who has to stand at a podium and say something that will make voters believe you'll sort this problem. Nigel Farage just suggested internment camps (on Fox News). That's where we are now.

I don't think May's suggestion has any way of ever working - but her listeners don't understand that. This is populism in its purest form.

I think by now our surveillance protectors already have total information awareness - well, perhaps 90% of the information and not very great awareness. But I doubt lack of information is the limitation on awareness. They could do a lot better even with less information, I suppose.

So why demand ever more intrusive powers? I think it's just an excuse, and that they don't have great ideas to preempt attacks.

> If you want a preview of what a back door looks like, just look at the US Transportation Security Administration's master keys for the locks on our luggage

Good point. I have the same issue in my building. The postman has a master key to open the mailboxes. Apparently, these master keys are now well-spread and I can't order packages anymore as they get stolen.

If 99.99% of transactions are not malicious in nature, why should the people who made those transactions suffer? There are other means to detect suspects that don't come at the cost of the privacy of the nation.

On the whole other point of this, don't you think there's a chance that certain terror/cyber attacks were made by some intelligence agency? The timing on this is too convenient.

Besides, there is no known method that can break all types of cryptography, so it's a useless expenditure of taxpayers' money.

To all my fellow still-EU citizens: you still have the possibility to leave your neo-fascist country for another part of Europe. It will not get better, as the Anglo-Saxon world is currently destroying itself.

The article is pretty sniffy about Hogwarts' security, but it's missing an important point: magic can detect intent. So yeah, it might be possible for Harry Potter to build what Theresa May is asking for, but a) he doesn't exist and b) I'm pretty sure it'd be against his progressive principles.

Imo this isn't about the legislation of today, but rather about the philosophy/politics of today being responsible for the legislation of the future.

Today's cryptography is like the art of ice sculpture: we can show a lot, but on an unstable timeline.

The true art is going to come with quantum computers, and governments will have to have legislation against someone sending messages to someone else, because they won't have any other tool available.

Using fear to push for restrictive laws is not news; pretty much every government did it or will do it one day. It's just too tempting, and people will accept the restrictions as a means to fight terrorism/pedophiles/serial killers/$CURRENTENEMY etc., until the new laws are slowly and silently used to quash dissent.

"The issue of payments to families of suicide bombers and others who commit violence has become a frequent complaint by Israel and its supporters. The Palestinian Authority spends about $315 million a year to distribute cash and benefits to 36,000 families"...

Only last week the Palestinians named a women's center after Dalal Mughrabi, who hijacked a bus on Israel's Coastal Road and killed 38 civilians, 13 of them children, and wounded over 70.

I feel the pain of the British from the terrorist attacks, but why don't they stop all funding to the Palestinians until we can be certain they are no longer funding terror nor glorifying terrorists? Why doesn't President Trump stop all funding if he is truly serious about combatting terror?

If I had to explain the basics of "information" and "communication" to average people, I would say something like this.

There is no fundamental difference between communication over the internet, by letter mail, or face to face by voice. Encryption can be used in any form of communication, and the problem with banning it is always the same.

Banning encryption is impossible simply because detecting encryption is impossible. When you see two people on the street, one says that the weather is nice, and the other responds that the grass is green, you can never know if there is a hidden message in their communication. Encrypted information can always be "tunneled" through unencrypted channels. Even if you ban all computer apps with encryption, ban people from making their own apps, and make every person wear a microphone and a camera 24/7, there will always be a way to deliver information from one person to another without anybody else knowing about it.
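A toy sketch of that "tunneling" idea: each bit of a secret message selects one of two prearranged innocuous sentences, so an eavesdropper sees only small talk. The phrases and the scheme are invented purely for illustration.

```python
# Toy illustration of tunneling a secret through innocent-looking talk:
# each bit of the message picks one of two harmless phrases agreed on
# in advance, so an observer sees only ordinary conversation.

PHRASES = {
    "0": "The weather is nice today.",
    "1": "The grass is very green.",
}
DECODE = {v: k for k, v in PHRASES.items()}

def encode(bits):
    """Turn a bit string like '101' into a list of innocuous sentences."""
    return [PHRASES[b] for b in bits]

def decode(sentences):
    """Recover the bit string from the agreed-upon sentences."""
    return "".join(DECODE[s] for s in sentences)

msg = "1011"
covert = encode(msg)
assert decode(covert) == msg  # round-trips through "small talk"
```

The point is not that this scheme is practical, but that no filter watching the channel can distinguish it from genuine chatter without knowing the codebook.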

Actually, banning encryption apps may be good for privacy, because you never know if the app maker put some backdoor in their encryption method and is already selling your information to somebody.

The solution is a driver's license to use the internet. No more anonymity. Can't create backdoors (that'd be like removing the steering wheels from cars), but the internet is a public space like our highways. You can cause a lot of damage there, so you should have a license or passport.

Controlling speech or disabling crypto is very Orwellian.

Obfuscating crypto is trivial. May must either be as dumb as Trump acts or think we are all stupid.

Much as I agree with the general principle, this argument is flawed (except insofar as 'good guys' do not remain 'good'). What exactly is a crypto key, if not a "back door that only lets X go through it"?

I don't want to make this too political, but I am really curious about people's perspectives. Please stay close to the center in your response to my question; obviously it is easy to make a knee-jerk response. (I address this at the end of this comment.)

So, we heard that a recent terrorist was banned from his mosque (for being too radical), was reported by other Muslims, and that the FBI also reported him to the UK.

What do you guys think that the UK should do in this case?

I mean let's take an absurd example of a petition by 50 people from mosques and community saying, "Hey, this person is not a member of our community but is spouting radical nonsense and wants to commit terror."

In this case what should be done?

I am drawing a complete blank, because it doesn't take long to prepare terrorist acts but until you do them you aren't really a terrorist.

The only thing I can think of is something that even I can see would be a joke. If the government came to me and said, "hey we received over 12 people asking us to watch you because you are a terrorist. We'd like you to voluntarily participate in civilization training to better understand why terrorism is wrong. You'll get 200 for participating."

But I can hardly type that without it sounding like a joke. I mean there's politeness but this sounds just absurd. (I added the 200 part because I think there is no way they would agree otherwise. But if it's not voluntary then that doesn't sit right with me either.)

So in this actual, real-world case, what should the UK have done?

I don't think increasing surveillance helps, so that you catch someone in the 45 minutes between when they inform themselves how to perform terrorism and when they go and do it. People are pretty strong and powerful and have a million tools of every kind; more surveillance couldn't possibly help here. I mean, the reaction time would have to be like seconds from when someone starts googling how to do terrorism to making a concrete enough plot to be criminal. It's just not a solution.

So returning to my question -- for the case I mentioned, what should be done?

Note: I understand that it is easy to make a flippant, knee-jerk response. For example, it is easy to say, "if someone reports a muslim for radical speech the reported muslim should be thrown in prison without a trial, and throw away the key". I really don't want to start a thread like that so please don't respond if you have attitudes like this: I've represented your response in this last paragraph and ask you please not to derail this thread on this subject. Yes, it is very easy to deal with if totalitarianism is okay. I specifically say this because I know people in real life who would make exactly this response or one exactly like it. That is not my question and I've represented this position in this paragraph, no need to repeat it, and you would just get downvoted. In this comment I am asking for people's practical ideas that are close to the center, if they have any. Thank you.

I've taught college. This study is wildly unsurprising. I've written about this in various places (e.g. https://jakeseliger.com/2014/04/27/paying-for-the-party-eliz...), but most colleges have evolved majors and paths that are designed to move students through the system, collect their tuition money, and graduate them.

In re-reading the previous sentence, I think I sound opposed to this. I am a little bit, maybe, but mostly I'm opposed to the way no one explicitly tells this to students. A lot of the brighter or better prepared ones figure it out, but many, it seems, never do.

I teach community college biology, and I agree that we're really bad at teaching critical thinking. But the Collegiate Learning Assessment (CLA) cited by this article was graded by a computer last time I checked. Here's a direct quote from one of their papers a few years ago:

"Beginning in fall 2010, we moved to automated scoring exclusively, using Pearson's Intelligent Essay Assessor (IEA). IEA is the automated scoring engine developed by Pearson Knowledge Technologies to evaluate the meaning of text, not just writing mechanics. Pearson has trained IEA for the CLA using real CLA responses and scores to ensure its consistency with scores generated by human raters."

Most of you are more knowledgeable about technology than I am. So I'll leave it to you to decide if using an algorithm to grade an essay-based exam of critical thinking is a valid approach to this problem.

Why wait for college to teach critical thinking skills? My father is a prof at a major university and we grew up discussing ideas, but high schools can teach critical thinking skills and problem solving. My high school was owned by the university and we did a lot of critical thinking.

Jewish religious schools (Yeshivas) teach critical thinking skills by studying the Talmud [1]. A number of Yeshiva students take the LSATs and skip college altogether to go directly to law school so powerful is the process of learning Talmud.

Basically Talmud is full of (often) legal arguments and stories and a lot of time is spent on thinking through/arguing edge conditions (e.g., a piece of property is found overlapping public space and private space).

The point is that college is absolutely not necessary to teach critical thinking skills and in my opinion this should be started at a much younger age.

Incidentally, I have found even graduates of Ivy League schools seem to not understand basic fundamentals. For example, in Economics, they don't seem to understand why housing is so expensive in certain cities and don't seem to have the analytical skills to understand why prices are high.

The problem isn't critical thinking skills. You can get together any 5 jokers and ask them 'what's the best way to build a backyard patio?', and they'll all start stroking their chins. But when thinking critically interferes with some sort of strong emotion, or pre-conceived belief system, then forget it. It doesn't matter how much education you have, if entertaining a particular problem causes your amygdala to start firing then your ability to think critically is out the window.

It was 7th grade, and I was in a home-ec-like class. The day before we had learned how to order from mail order catalogues (showing my age there). This day the teacher passed out magazines, told us to pick an ad, and then find 5 ways it was misleading.

Easy, right? Sex, money, Fame, these associations are in a bunch of ads, and everyone knows about them. But it turns out that 5 is a pretty high number for some ads. You had to really look. And even that didn't change anything for me.

Then we presented to others. And one girl showed an ad for Bayer, and said "4 out of 5 doctors recommend. Who picked the 5 doctors?".

My mind was blown. I think it was the moment where I considered myself a good judge and then was shown a point I had never even considered. I had thought all about having careful wording on the survey, not mentioning any negative results, but I had never considered that the very basis of it could be manipulated to the point of meaninglessness.

I think that moment of fundamental distrust, in both what I'm being told, as well as in my own certainty, did the trick.

Perhaps too well - I'm hypersensitive to being manipulated. I rejected any career that involved deliberate group manipulation, such as military, law enforcement, and legal. I recognize that EVERYTHING is manipulative to some degree and can't be avoided, but I try to avoid anything that does it very explicitly, so I can't for example, watch most documentaries. The moment the vocal pacing and background music starts something in my brains starts shouting "YOU ARE BEING MANIPULATED!" and I try to fight that manipulation, which is largely impossible so I generally end up turning it off. Ditto political speeches (I'll skim the transcripts, thanks), most anything out of marketing, etc.

I don't really think we can "teach" critical thinking, but we can provide opportunity for it again and again. I think our school system in the US (no experience elsewhere) is very poorly set up to do that, be it college or pre-college.

> Results of a standardized measure of reasoning ability show many students fail to improve...

The irony of this sentence is painful. The entire reason most colleges fail to improve reasoning - something everyone has known for a while now - is because of standardization and industry-oriented training. They've transformed into advanced trade schools, caring more about selling products (graduates) than producing well-rounded, capable leaders. The entire idea of a standardized test is to produce the very metrics they use to sell those products.

And you know what the worst part about it all is? They are using the old college model (4-year baccalaureate programs) to do what could be done just as effectively in about two years. So they aren't even good at what they are TRYING to do.

Every time a "college sucks" article gets published I think the same things:

Look at the college enrollment rates since the 1960's. Look at the tuition rates since the 1960's. Look at the distribution of majors since the 1960's.

Then proceed to look at the labor market. It all becomes very clear. There are millions of great young people roaming the halls of colleges who are not engaged in higher learning. Great young people who would develop critical thinking skills from work, family, and good on-the-job training.

Many of these young people are told from an early age that college is a must in order to get anywhere. Whether that's true, I can't answer with confidence. I waited to go to college. After high school I decided to work, pay bills and taxes. In my late 20's I went back for a CS degree and am productive and happy now. Had I gone right out of high school I would have wasted a lot of time and money.

Is there even a solution to this issue outside of the family? Is the focus and quality of k-12 in the wrong place? Is it a mixture? Who knows?

Critical thinking is an important skill, but I'd like to caution against this fixation on critical thinking taught in college as some sort of beacon for society.

Critical thinking is something people develop over the years, and it starts early IMO. It's not just a 4 year course. It's a whole approach to the world around you. There are many critical thinkers, in my experience, outside of college. And I don't see it as a problem as such.

Also, it doesn't matter how good a critical thinker you are; we all have blind spots and biases that make it impossible to be critical thinkers in all contexts. I will need to look at the study to see how it's actually measuring critical thinking skills.

Many of those who first learn critical thinking when they get to college end up getting such an aha moment that they think critical thinking is the same as constructive thinking and should be applied to everything.

You often meet them in big companies or management. Many of them like to play the devil's advocate, poking holes in everything around them, but aren't able to come up with solutions themselves.

In my view critical thinking is best learned by reading philosophy and seeing how philosophers historically either improved or created new theories, because there critical thinking and constructive thinking go hand in hand. If you read the right progression of philosophers through time, you end up understanding how they didn't just critique but put forward their own theories, which could then be critiqued.

In my view critical thinking without constructive thinking is as big a problem as no critical thinking.

The thing that most struck me after signing up for a few college courses this past semester for the first time is how little emphasis there is on actually learning the material. Especially in math classes. The entire focus is on passing a test. It seems like the entire system is just set up as a means of "testing" whether you already know enough to pass a given course, rather than the focus being on learning and developing new skills.

I went to MIT, and I'm pretty sure everyone already had critical thinking skills. In fact, I just assumed that's part of what the admissions office was looking for.

> at least a third of seniors were unable to make a cohesive argument, assess the quality of evidence in a document or interpret data in a table

Is this what defines critical thinking? Because if these are the skills they want to teach, they should just explicitly teach them. Philosophy taught me a bit about arguments, but it wasn't writing class. In writing class we wrote, but they didn't teach structured arguments.

Personally, I loved solving logic puzzles as a kid, and I'd read. Also my mother raised me to think carefully and objectively. I don't ever remember being taught "critical thinking" at school though - not in college or anywhere else. I'm not aware of any workplace that teaches it either.

I've always wondered if going to college immediately at 18 is the wisest of choices. Personally I worked numerous jobs until 30 and earned my bachelor's in history and political economy. I always appreciated each class and all that was offered, while everyone else around me, being way younger, was recovering from the night of partying before. I know how I was at 18: I was tired of high school and ready to just explore the world. I did, and when I went to college it was on my own dime and when it felt right.

Granted, what was learned would be considered soft, nothing that could really show in the coding world.. and I get it.. you go to get technical skills to get a good job. To me, though, if this is what college is about then perhaps we should aim for more of an apprenticeship-type setup like Germany's. Liberal arts colleges can still exist, but they'll be there to teach a more mature crowd able to pay out of pocket, not something made almost a requirement. That's not to say you need a college degree to succeed.. I was already set up in my career at the time without any college experience. Considering now I'm trying to start an aquaculture company, I probably should have majored in marine biology... then again.. I really didn't become passionate about over-fishing until I took a political course on it. Shrug.

Many comments here are about the value of higher ed generally and are fascinating to read, but I'm interested in critical reasoning particularly, and this study doesn't surprise me.

(1) Critical reasoning is rarely taught directly, especially to students who don't major in or take a philosophy course.

(2) Even when critical reasoning is taught directly, it's poorly taught. Compare an introductory text on critical reasoning from fifty years ago with one today. You will find that the former feels like it's written for a user of reasoning (which is as it should be written) and the latter is written for explainers of reasoning (colleagues or future academics, I guess?). Jargony, technical, prolix, etc.

(3) Too many professors in the humanities are influenced by a conception of argumentation-as-narrative rather than argumentation-as-truthseeking, or deny there's a distinction or that the latter is possible. Quality of indirect/incidental critical reasoning education is not what it used to be.

(4) STEM education overemphasizes formal logic. Most of our daily reasoning that's worthy of being called "logic" is informal logic.

Outside of university is more important, but things don't look great there either, for reasons everyone here is already familiar with. Echo chambers. Loss of nuance as deliberation is framed in terms that can easily be liked/hearted/shared/retweeted. Curious what, if anything, folks here think could be done to turn things around.

It isn't just that people aren't taught thinking skills, it's that people are actively attacked and coerced into suppressing that kind of thinking style. Going through normal public schooling systems most people are taught during key developmental phases that questioning the world around you causes punishment. If it isn't your parents, it's your teachers or the government constantly shoving stupid thought-suppressing ideas in your head. During these phases your immune system learns to associate free thinking with abuse and pain. When you are an adult it becomes very difficult to undo this. An adult who gets very emotional when their beliefs are questioned likely got abuse and punishment when they questioned the beliefs of those around them in youth.

The article is about college, but what about the previous 12 years of school? Why don't students learn critical thinking during those years? Twelve years of school and students lack learning skills and critical thinking skills, and, what burns me most, high school graduates don't have a marketable skill they can use to get a job if they have to start working.

Last year's election focused on some very irrelevant subjects, yet our graduates aren't ready for the world they have to face. School reform should be a hot subject, yet it's not at the top of the list. Startup jockeys, take note: the US school system is ready for disruption. I hope it happens soon.

Not the biggest fan of higher ed, but why put this on the colleges? Why not the high schools? Eighteen was practically middle-aged in the 19th century. We just keep dropping that bar and infantilizing people so much that WSJ will be writing this about PhD programs in a few more years.

I went to Boston University for undergrad. When I went, tuition and board were 46k, which I thought was absurd. Fast forward a decade and it's 70k. At this rate, in less than 10 years it will be 100k per year. How does any of this make sense?!?!?!?

> Thomas Jefferson proposed "establishing free schools to teach reading, writing, and arithmetic, and from these schools those of intellectual ability, regardless of background or economic status, would receive a college education paid for by the state."

> In the United States, the first free public institution of higher education, the Free Academy of the City of New York (today the City College of New York), was founded in 1847 with the aim of providing free education to the urban poor, immigrants and their children. Its graduates went on to receive 10 Nobel Prizes, more than at any other public university.

> City's academic excellence and status as a working-class school earned it the titles "Harvard of the Proletariat," "the poor man's Harvard," and "Harvard-on-the-Hudson." Ten CCNY graduates went on to win Nobel Prizes.

If I'd used my critical thinking skills to go to an HVAC vocational track, following years of hourly / labor Summer jobs as a teenager, and took out a business loan half of what I've spent on University studies, I'd probably have a small empire by now.

Universities are great for a liberal arts study track, but that's kind of it. I'm not even sure most require students to study retirement planning or "How to Understand a Car Loan" in practical terms.

It took me a long time to really develop critical thinking skills. I'm still behind where I think I want to be. One thing I've noticed is spending more time on the right sites, like HN, has helped tremendously. Even if they aren't perfect. Another thing that has really helped is spending more time with critical thinking friends.

So what really makes the top colleges so great? Is it really just the professors and curriculum, or is the real value in that more bright minds are all grouped together?

I'm working on critical thinking with my 2 year old. If he can't think critically by college then I've failed.

I spend time each day practicing discussing things with him and he has already come to assume if he wants anything in life he will need to talk about it as throwing a fit or whining ensures he does not get anything.

People assume because kids haven't been trained, they can't be trained. So they wait until they are older (or even at college level) to begin training. Really bad idea.

Not having read TFA due to paywall I've noticed that a hell of a lot of people deriding critical thinking really mean something like: "So many people disagree with me about the environment/healthcare/religion/liberty/whatever and I just know they're wrong so they must be unable to think critically."

They, them, over there. There are whole courses run on "Why other people are so unfathomably wrong." [1]

Maybe the TFA says so, but maybe we should actually look at our own thinking. What facts we'd actually not bet on yet find it ok to use as opinion foundations. How many ways could we be wrong in what we think. It doesn't seem to be popular (or I'm missing the point, am not up to date with the zeitgeist, or thinking is totally overrated anyway or ...)

I don't think I was forced to develop critical thinking in a systematic way until my PhD, where I actually had to produce ideas that would withstand scrutiny by both professional scientists and experiments. I went to a magnet high school and then a liberal arts college and the emphasis seemed largely on preparing for tests at both places. It is probably true that I could have developed my thinking more at an earlier stage if I had been more self-motivated. Needless to say, the PhD was a painful experience.

Is it possible a large portion of the observed behavior of critical thinking is personality based (i.e., Jung's theory)?

That is to say could it be certain personalities are more likely to analyze. They may not be smarter or even more educated but are more drawn to problem solving and analysis than others.

So even if the individuals were taught to perhaps be more logical, detail-oriented, not reactive, etc., it may be so incredibly unnatural that a normal psych test may not elicit the behavior... just a theory... I'm probably wrong.

While it seems critical thinking is a good thing it might be in some cases detrimental particularly if it requires more time. That is reactive individuals who prefer not to rely on critical thinking might be able to make critical decisions quicker (albeit possibly incorrect).

You could possibly achieve interesting results with a single handset to keep in your pocket as you go about your day. The Samsung Galaxy S3 is ideal because Android apps are written to access low-level data from its baseband, which is normally not available to end-user applications.

In fact there is a company that sells re-modded S3's at a decent price for this exact purpose [1].

Save some money and find an old handset and load on free IMSI catcher detection software. [2]

EDIT: It seems SnoopSnitch [3], which is used in the SeaGlass project, works on rooted Android phones that use Qualcomm chipsets.

It would be interesting to push this out to the crowd of people interested in privacy. Maybe we could put a setup like this in our own cars, or at least run an app on our phones. It would really harm their surveillance efforts if 1000's of people were contributing to a global map.

> So there are factory methods in each cellphone where you can get the tower ID and RSSI and other data from the tower... what is needed is an app that actively logs ALL that data with the GPS location of the phone regularly and pushes it to a DB in AWS - and you keep capturing all that data, and you compare geo-loc from all the phones and the towers they see/connect to when within that cell's signal domain - the app should be able, after time, to "know" which tower it should be connected to based on GPS as it moves into and out of each cell... you get an alert if the phone connects to the non-predicted cell signature.
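The core of that idea can be sketched in a few lines: learn which tower is normally observed in each GPS grid cell, then flag a connection to a tower never before seen there. The grid size, class name, and fields below are assumptions for illustration; this is not any real baseband or AWS API.

```python
# Rough sketch of a "predicted tower" anomaly detector: record
# (GPS grid cell, tower ID) observations, then flag a tower that has
# never been seen in that area before.
from collections import Counter, defaultdict

GRID = 0.01  # grid cell size in degrees (~1 km, a rough assumption)

def grid_cell(lat, lon):
    """Quantize a GPS coordinate onto a coarse grid."""
    return (round(lat / GRID), round(lon / GRID))

class TowerWatch:
    def __init__(self):
        # grid cell -> counts of tower IDs observed there
        self.seen = defaultdict(Counter)

    def observe(self, lat, lon, tower_id):
        """Record an observation; return True if the tower is anomalous
        (i.e., this area has history but has never seen this tower)."""
        cell = self.seen[grid_cell(lat, lon)]
        anomalous = bool(cell) and tower_id not in cell
        cell[tower_id] += 1
        return anomalous

watch = TowerWatch()
watch.observe(47.60, -122.33, "tower-A")          # first sighting: learn it
watch.observe(47.60, -122.33, "tower-A")          # expected tower: no alert
alert = watch.observe(47.60, -122.33, "tower-X")  # unknown tower here: alert
print("anomaly:", alert)  # anomaly: True
```

A real deployment would need much more care (towers legitimately change, cells overlap, phones roam), but crowd-sourcing many phones' observations into one shared map is exactly what would make the prediction step workable.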

The main issue I found in the algo and financial aspects of programming is that the market is a zero-sum game, and my intro knowledge of finance and algorithms, even knowing Python, is no match for MIT PhD quants who do it full time. There's no real way to compete with that, and therefore I would lose money; even if the data showed a strategy might be successful in the future, firms and full-time workers in algo trading would simply be faster, more focused, have more funding, and be able to quickly and constantly adapt at a scale an individual could not.

So despite the fact that the subject is interesting, I'd consider it a waste of time to try and gain anything but a basic understanding of the industry and how algo trading works.

Personally, I struggle to see the competitive advantage Quantopian brings. They use retail brokerage platforms to facilitate trading, which rules out anything close to HFT. Then, they are tied to whichever financial data vendor they use (Morningstar in this case), with limited visibility into the underlying data. As others have mentioned, this makes it tough to validate aspects like adjusted vs. as-reported earnings, how delistings are handled, etc. From my experience, getting/making sure the data is accurate is a ton of work, even if it is from good sources and you can see all the actual data. The moment an investor/trader on the platform gets traction is the moment they want Compustat data, exchange data, Bloomberg for fixed income, and will trade through Instinet/Flextrade, etc. The moment the platform is successful is the moment Morningstar could pull the rug out from under them. If someone has better understanding or knowledge of Quantopian in particular, I'd be interested to hear why.

QuantConnect & LEAN give you the ability to work at anything from tick to daily resolution, for equities (with Morningstar data), futures, options, forex and CFD trading, all in a fully open-source project which includes sample data to get you started.

The grunt work is still done in C#, so it's faster than other fully Python-based backtesting engines. Edit: I'm the Founder @ QC.

I went to the Quantopian conference for their basic training on algorithmic trading. This blog post was pretty much what they covered, intro to pandas and a simple strategy. There is a lot of educational material on their site too (which is what you ended up getting in the paid training).

My biggest thing with the Python for Finance books - I know Python, I want to learn finance. All these books are the inverse of that, for people who know finance and want to learn Python. There is a good site for quantitative economics [1] that has tutorials in Python and Julia. I would love a mathematics of finance book that had the examples in Python.

This is nothing more than gambling. Let's say you have 20k to play with. You would be far better off in the long run to put 15k in Wealthfront and use the other 5k as a bankroll of 25 buyins for 1/2 no limit holdem to learn the game.

It is a great article, but why on earth would someone use a service like Quantopian or a similar service?

They are your competitor, and who will prevent a disgruntled employee or a hacker from stealing your successful trading strategy?

Just buy some data from eBay, you can get 20 years of historical stock market data for less than $100 and you can test any trading strategy or idea imaginable, including trend following, buy and hold ETFs, etc...

The barrier to entry is pretty low, and you can build a great lifestyle business around that, with no customers, employees, or investors...

When you get used to this kind of high quality metadata, it's just so so sad to see how companies like Spotify treat metadata. As an example, look up Bob Marley & The Wailers on Spotify and try to find original releases, and then compare that to the list found here:

I've been contributing data and code to MB and its sibling projects for over two years now, and the community has been great from day one!

Just to name a few of the other projects, there's AcousticBrainz [1] collecting acoustic information which may be pretty useful for machine learning, CritiqueBrainz [2] for collecting user reviews of songs, albums and more, ListenBrainz [3], an open scrobbling service a group of people including former last.fm employees initially hacked together in a weekend, and finally BookBrainz [4], which tries to be what MB is but for books.

During the last year the people running MB have worked on getting companies using the data to support the project resulting in a quite impressive list of supporters [5] including big names like Google, Spotify and the BBC.

MB has also collaborated with our fellow data nerds over at the Internet Archive to create the Cover Art Archive. [6]

In general the project is run by people who equally love both data and hacking. Feel free to stop by on the IRC channels #musicbrainz and #metabrainz on freenode!

Wow. This brings back memories. At uni in the early 2000s I hacked up a geeky "last.fm"-inspired music stat service. The idea was to be able to reliably track music being played without needing a plugin for winamp/foobar2000/another media player and without needing the mp3 file to have metadata.

I lightly modified a version of the Filemon driver from Sysinternals and wrote a little C program that used the driver to monitor for mp3s being played and then grab the perceptual audio hash of the file using trm.exe from Musicbrainz. It then sent the resulting fingerprint off to my website (written in glorious PHP3 no less!) and you could login with an account to see stats on the music you'd been listening to (done with meta data pulled from Musicbrainz).

Surprisingly, it worked reasonably well... though I'm very sure that if I looked at the code now I'd run away screaming.

What ends up on the front page (and when) never ceases to surprise me. I've used this for I don't know how long. Could it be 15 years? Their official tagging client (Picard) is OK, but I prefer tagging with Mp3tag backed by the MusicBrainz database.

I'd just like to reiterate how utterly amazing MusicBrainz is. It's so extremely useful that I decided to make it the backbone of a new playlist format I developed[1], one which (roughly) uses MusicBrainz IDs instead of filenames for playlists.

This makes playlists resistant to filename changes, moves, or even losing all the actual audio tracks and having to buy them again, all because MusicBrainz provides such accurate metadata.
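As a sketch of what such a playlist might look like (the playlist structure and the MBIDs below are made-up placeholders; the /ws/2/ lookup URL scheme is MusicBrainz's public web service):

```python
# A hypothetical MBID-based playlist: tracks are identified by stable
# MusicBrainz recording IDs (UUIDs) instead of fragile file paths.
playlist = {
    "title": "example playlist",
    "tracks": [
        # Placeholder UUIDs, not real recordings.
        {"recording_mbid": "00000000-0000-0000-0000-000000000001"},
        {"recording_mbid": "00000000-0000-0000-0000-000000000002"},
    ],
}

def lookup_url(mbid):
    """Build the MusicBrainz JSON web-service lookup URL for a recording."""
    return f"https://musicbrainz.org/ws/2/recording/{mbid}?fmt=json"

# Resolving the playlist is then just a series of metadata lookups,
# independent of where (or whether) the audio files currently live.
urls = [lookup_url(t["recording_mbid"]) for t in playlist["tracks"]]
for u in urls:
    print(u)
```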

I love MusicBrainz and have been using it for a project of mine for the past few years. In the course of developing that project, I ended up making a GraphQL interface to the MusicBrainz API: https://github.com/exogen/graphbrainz

You should try out the demo queries linked from that README if you want to get a sense of the depth of information available in their database.

I've been using MusicBrainz' Picard to tag my music files (that I acquired 100% legitimately, I assure you.) for a few months now.

They seem to have everything I throw at them, except for:
1) Extremely new releases (on the order of a few hours after release)
2) Some niche songs that haven't been officially released (soundtracks for some Korean television shows)

It's good that MusicBrainz exists as open data project and continues to stand up against Sony America & Sony DADC defacto monopoly on audio+video metadata and digital supply for the media industry.

MusicBrainz is the third project of its kind. Two previous, older projects were bought by the media industry (Sony and Magix). Such a database becomes useless if it doesn't receive updates.

First there was CDDB, short for Compact Disc Database, a database for software applications to look up audio CD (compact disc) information over the Internet. This is performed by a client which calculates a (nearly) unique disc ID and then queries the database. As a result, the client is able to display the artist name, CD title, track list and some additional information. CDDB was invented by Ti Kan around late 1993 as a local database that was delivered with his popular xmcd music player application. CDDB is a licensed trademark of Gracenote. In March 2001, CDDB, by then owned by Gracenote, banned all unlicensed applications from accessing its database. As of June 2, 2008, Sony Corp. of America completed acquisition (full ownership) of Gracenote. https://en.wikipedia.org/wiki/CDDB

Then there was freedb, a database of compact disc track listings where all the content is under the GNU General Public License. To look up CD information over the Internet, a client program calculates a hash from the CD table of contents and uses it as a disc ID to query the database. If the disc is in the database, the client is able to retrieve and display the artist, album title, track list and some additional information. It was originally based on the now-proprietary CDDB (Compact Disc DataBase). On October 4, 2006, freedb owner Michael Kaiser announced that Magix had acquired freedb. On June 25, 2007, MusicBrainz, a project with similar goals, officially released their freedb gateway, which allows users to harvest information from the MusicBrainz database rather than freedb. https://en.wikipedia.org/wiki/Freedb
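The disc-ID hash both services describe is simple enough to sketch. This follows the commonly documented freedb/CDDB checksum, taking track offsets in CD frames (75 frames per second); the example TOC is made up:

```python
def cddb_digit_sum(n):
    """Sum of the decimal digits of n (used in the CDDB checksum)."""
    s = 0
    while n > 0:
        s += n % 10
        n //= 10
    return s

def freedb_disc_id(track_offsets, leadout_offset):
    """Classic freedb/CDDB disc ID.

    track_offsets: start of each track in CD frames (75 frames/second).
    leadout_offset: start of the lead-out in frames.
    Result packs checksum, disc length in seconds, and track count
    into one 32-bit value.
    """
    checksum = sum(cddb_digit_sum(off // 75) for off in track_offsets)
    total_seconds = leadout_offset // 75 - track_offsets[0] // 75
    return ((checksum % 0xFF) << 24) | (total_seconds << 8) | len(track_offsets)

# A made-up 3-track disc, ~10 minutes long:
disc_id = freedb_disc_id([150, 15000, 30000], 45000)
print(f"{disc_id:08x}")  # → 08025603
```

The low byte being the track count is why different pressings of the same album so often collide or mismatch, one of the problems MusicBrainz's richer identifiers solve.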

Heh. I was on a team of Amazon engineers in Edinburgh back in 2007 who were tasked with building "another IMDB that we can sell ads on", and we ended up using a MusicBrainz dump to start up a music encyclopedia website. The idea was to take the raw data but organise it in a more user friendly way, add easy click-to-edit user participation and gamification, etc.

I remember seeing Robert Kaye wandering around the office when he visited us to talk licensing terms, although as the most junior employee I didn't get to talk to him myself. We also chatted to Col Needham, the founder of IMDB, and asked him "so, how do you become a massive media-encyclopedia site?"; his answer was "it's easy, just start 17 years ago."

Really we had no idea what we were doing, and although we got some surprisingly dedicated users (we sent T-shirts to a couple who'd contributed hundreds of thousands of edits!), the site folded after a few years.

I'm very glad to see that MusicBrainz outlived us and continued to thrive :)

I used the MusicBrainz API a while back for a side project that got me sued for some reason (http://tcrn.ch/2rEox3h).

As I recall, it was pleasant to work with and did what I needed quite nicely, aside from the removal of a feature my code had depended on (anonymous/unauthenticated search), at which point the project was already basically dead and not worth trying to fix (that was just the last nail in the coffin). In any case, nice to see that it's still active.

Always nice to see something of such utility pop up. MusicBrainz has most assuredly been around for what seems like forever now, and there's a reason for that: their tag database is second to none as far as I'm concerned. Unfortunately for me, the only music I keep locally is my own music that I've made, and I can almost guarantee that wouldn't be on there. Plus I tag all my music properly, to a point that might seem religious and obsessive, because I hate music files without metadata (which is why I export in MP3 as well as WAV: WAV for higher quality, and MP3 for labeling purposes; I could probably just use FLAC, but compressed audio like MP3 also has the benefit of being less space intensive).

Whilst it is great to be able to tag music files with masses of MB metadata, I have a feeling that the true value of the MB database has yet to be realised.

Because of the underlying design and relationships between albums and recordings and musical pieces (or works), once it reaches some level of critical mass you can start to mine the data for things like:

Who has recorded versions of Vivaldi's Four Seasons Spring in London?

Which artists have recorded both Grieg's Piano Concerto and Chopsticks?
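As a toy illustration of that second kind of query, here is the idea over a drastically simplified schema (the real MusicBrainz schema routes artist-recording-work links through several more tables, and the names below are invented):

```python
import sqlite3

# In-memory toy database: artists, works, and which artist recorded which work.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE work (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE recording (artist_id INTEGER, work_id INTEGER);
INSERT INTO artist VALUES (1,'Artist A'),(2,'Artist B'),(3,'Artist C');
INSERT INTO work VALUES (1,'Piano Concerto'),(2,'Chopsticks');
INSERT INTO recording VALUES (1,1),(1,2),(2,1),(3,2);
""")

# Artists who have recorded *both* works:
rows = con.execute("""
    SELECT a.name FROM artist a
    JOIN recording r ON r.artist_id = a.id
    JOIN work w ON w.id = r.work_id
    WHERE w.title IN ('Piano Concerto', 'Chopsticks')
    GROUP BY a.id
    HAVING COUNT(DISTINCT w.id) = 2
""").fetchall()
print(rows)  # only Artist A recorded both
```

Once work-level relationships reach critical mass, queries like this (and the London-recordings one, with a place table joined in) become one JOIN away.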

I've been trying to figure out which music metadata database is worth my time "improving", since there are three that are commonly used. MusicBrainz, Discogs and Rate Your Music. I use Discogs currently because you can expect high quality metadata, and I use that data in a Foobar2000 plugin to tag my music correctly.

It's the constant questioning I do for Wiki sites, since there are multiple for most subjects. Am I alone in this struggle? I wouldn't mind being talked out of using Discogs for the sake of creating / managing metadata that will be the most useful.

One of my side projects is a music recommendation system. MusicBrainz has been great for this, tying together all the music services out there. In addition, the biggest perk is that you can run a replica of their database and have it sync on an interval.

A 256-byte packet and a 192-bit authentication hash: why use fast-flux DNS to run C&C on your botnet when you can just make the bots Twitter followers?

EDIT: And in case that isn't clear: imagine you have a botnet, and all of the individual members create a Twitter account. All of the botnet accounts follow the 'master', who can tweet a command (and corresponding authentication key) to the botnet saying "follow Chuck and do up to n things for him, here is his public key". Now Chuck suddenly has all these followers, and when the time is right he tweets out his command, "DDoS my greatest enemy", and adds his 'proof'. Off they go and blast his enemy. If he was only allotted one command, then they all unfollow him.

If you really want to put binary data on Twitter, why not encode it in an image? You could probably get several tens of kilobytes of binary data reliably encoded in a JPEG of the maximum size Twitter allows.

Neat. I see a lot of mention of Twitter but the first thing I thought of was packet compression. A ~50 byte packet shaves off around 20 bytes with this. Those are good savings although I haven't looked into the encoder / decoder enough to know if it's worth the tradeoff of having to translate every packet on both ends. I can also see UDP datagrams being a pain in the ass to work with when you're throwing around streams of Unicode characters.

Overall though, I like it and look forward to Base131072 being possible!
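The character-count tradeoff is easy to sketch. This is not the real Base65536 alphabet (which carefully picks code-point blocks that survive Twitter's normalization), just the underlying idea of packing two bytes per Unicode character:

```python
import base64

def pack_two_bytes_per_char(data: bytes) -> str:
    """Simplified Base65536-style packing: two bytes become one code point.
    The 0x10000 offset is an arbitrary choice for this sketch, not the
    real scheme's alphabet."""
    if len(data) % 2:
        data += b"\x00"  # naive padding, illustration only
    return "".join(chr(0x10000 + ((data[i] << 8) | data[i + 1]))
                   for i in range(0, len(data), 2))

packet = bytes(range(50))               # a ~50-byte "packet"
b64 = base64.b64encode(packet).decode()
packed = pack_two_bytes_per_char(packet)
print(len(b64), len(packed))            # 68 characters vs 25 characters
```

Note the saving is in *characters* (what Twitter counts), not bytes: the UTF-8 encoding of those astral code points is actually larger than the raw packet.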

I've been participating in Google Contributor since its inception; this is something like the 3rd relaunch of the service. I have no doubt it will fail just like the past iterations, because the fundamental flaw is that Google is at the heart of the system and unwilling to extricate itself from it. I don't think this type of service is the way forward, and the solution will not come from Google or any other ad provider, for a number of reasons. The first is that Google is not the only ad network, and no one wants to be cut out. The second is that this does nothing to address the privacy or security issues that drive people to ad blockers today.

There aren't just a handful of ad networks; there are thousands, if not millions, out there. On top of that, they utilize each other to push out ads in a horrid, rat-king-like incestuous jumble. Any payment to avoid ads served by these companies would require compensating all of them; the end result, predictably, would be movie-studio accounting that leaves the content provider with nothing.

This setup does nothing to address the privacy issues people have with companies like Google tracking their comings and goings. Google is still at the heart of this system and still knows everything about you. To get any benefit from this system actually requires you to embrace Google. People want to maintain their privacy, they don't want to login to Google to get rid of ads.

It's easy to envision a system utilizing a cryptocurrency and a digital wallet held by your browser, one that you fill occasionally and that prompts you to pay a site (much the way Location Services prompts work) based simply on a meta tag the site provider puts in their page head containing their wallet address, requested amount, and payment schedule. It's impossible to imagine Google, Apple, Facebook, or anyone who wants in your pants allowing themselves to be cut out of a revenue stream by such a system. Companies like this are double dipping: charging everyone else to be the broker while also being the service provider getting paid.

I honestly don't know if an ad-free web is achievable. It's technically possible, but everyone who isn't the content creator is going to do everything they can to stop it from happening.

Whether it comes from Google or someone else, I believe this is the only way the web survives.

Content creators need to be able to charge different amounts for different quality content.

In depth, well researched reporting needs to be able to earn more than a buzzfeed article. That's not possible with a flat "per-eyeball" cost, where the revenue to the content creator is uncorrelated with the cost to create or the value/quality of the content.

I wish it weren't google (who also already owns advertising), but someone large is the only one who can make it happen.

This doesn't seem that useful to me as only a small number of sites (none of which I visit) support it.

Hypothetical question: If I were allowed to bid on my own ad impressions - and if I won an auction, no ad would be shown - how much would it cost a month for me to see no adverts? (I realize this is heavily dependent upon the type of sites that are involved, so I guess take the average HN user as an example).
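A back-of-envelope answer to the hypothetical, with every number (pages per day, ad slots per page, clearing CPM) a loudly hypothetical guess rather than a measured figure:

```python
# Rough self-bidding cost model: win every auction for your own impressions.
pages_per_day = 200          # guessed browsing volume for a heavy user
ad_slots_per_page = 3        # guessed average ad slots per page
winning_cpm_usd = 2.00       # guessed cost per 1000 impressions to win
days = 30

impressions = pages_per_day * ad_slots_per_page * days
monthly_cost = impressions / 1000 * winning_cpm_usd
print(f"${monthly_cost:.2f} per month")  # $36.00 under these assumptions
```

Real CPMs vary wildly by site and audience (and a known self-bidder would likely get bid up), so the interesting takeaway is the shape of the formula, not the $36.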

One has to mention the Brave browser for comparison: https://brave.com/ -- similar concept but using Bitcoin. The accounting at https://brave.com/publishers.html looks like you as the reader can DECIDE whether you want to issue micropayments to a particular site or not, and publishers don't have to explicitly opt-in beforehand (thereby instantly including all of the web). A publisher won't be able to charge different prices, but a publisher with goodwill (hence users opting on their own to pay that publisher) will make money. This seems like a better execution.

Google Contributor was a program run by Google that allowed users in the Google Network of content sites to view the websites without any advertisements that are administered, sorted, and maintained by Google.

The program started with prominent websites, like The Onion and Mashable among others, to test this service. After November 2015, the program opened up to any publisher who displayed ads on their websites through Google AdSense without requiring any sign-on from publishers.

Since November 2015, the program was available for everyone in the United States. Google Contributor stopped accepting new registrations after December 2016 in preparation for a new version launch in early 2017.[1] On January 17th, Google Contributor was shut down. As of January 17, 2017 8:40 AM no replacement had been announced.

Remember when we were told that ads paid for content so that it could be free? Now you can pay extra and get ads anyway; the non-Google networks don't care. The web is turning into cable, and it only took a few years.

Wow, how much does this have to do with their announcement to add ad blocking to Chrome (only for other networks ads, I'm sure)? How have they not attracted regulator action yet? You'd think the EU would be all over that kind of behaviour.

Everyone is forgetting that Google can provide such a tool because of Chrome (60% market share); they don't need to track you, they already are. Google is tightening its grip on the web. Yesterday they announced that they will apply the Better Ads Standard by 2018. They said they'll ban intrusive ads that block the user from the content, ads that play sound automatically, and flashy ads... Now "flashy" is so vague.

I really think a much better system would be for websites to adopt GNU Taler, and allow people to conduct micropayments using digital cash. The system is about as seamless as Flattr, except that the website can actually charge an amount rather than a fraction.

But, most importantly, Taler guarantees the anonymity of consumers' transactions, so the big G won't have a log of what websites you paid to access.

In simple terms, is this a more moral route than using an ad blocker? Rather than blocking ads, I get this pass where I won't be seeing ads, and at the same time I feel I'm letting the publisher earn some money?

Somebody should do this for mobile game advertising. A player buys a $5 card which allows them to skip ads in all participating mobile games until the $5 is depleted. If that card worked across lots of games, it would be a great convenience. Considering ARPDAU from ads for mobile games often isn't more than $0.01, it would be a win-win for gamers who hate ads and want to play free mobile games.

I like the technology and pricing model, but I don't think that it is being put to its best use. I think a better use would be for news sites that require a login to view articles. At present I usually just go without viewing many as I can't afford to sign up to 10 different sites where I might view a couple of articles each week. If I could make a one off payment per article then I'd be all over it.

So, this is kinda like Blendle, but for the open web. I'm not too sure if this'll fly. Personally, there's a certain flinch, a certain decision before opening any article in Blendle as I evaluate if it's worth the price mentioned. I'm sure the same will manifest itself, perhaps in uglier forms if I set out to use Contributor. Do you guys think so?

Google: Nice internet ya got there. It'd be terrible if something were to happen to it like, say, ads everywhere. But if you just give me a few dollars I'll protect ya from 'em.

Edit:

>How it works: You load your pass with $5. Each time you visit a page without ads, a per-page fee is deducted from your pass to pay the creators of the website, after a small portion is kept by Google to cover the cost of running the service. The price per page is set by the creator of the site. You will be informed in advance if a site creator changes their price per page. Contributor is easy to update: change settings and add sites or remove them from your pass at any time.

I love the sound of this, I just do not like the idea that it is Google doing it. It feels ... dirty somehow. A third party doing this I have no problem with.

I'm not sure this is a good idea (as a consumer): anything that injects extra buying decisions into my life seems like a bad idea. Imagine having to wonder whether I really wanted to spend that 0.01 cents on the next page of Popular Mechanics or not. I'd rather pay more for, say, unlimited monthly access.

My idea in this space is to launch the no-ad-network. Basically you would pay them to bid on your behalf across all exchanges for your cookie. It would either display nothing, nice photos or maybe even some customized data about the site you are on. Everyone wins.

It simply isn't worth it, as Google doesn't actually enable content creators unless you create content on their sites. We need a better way to give content creators something akin to Patreon, but easier.

This is terrible. This is where Google is trying to control the web even more than they do already. If this sticks around, if this actually happens, then this will destroy what so many have worked to achieve.

Let's actually sit down and think about this. If this happened, there would be some big changes to the web. (Note- this is a quick response. I should write a real paper about this)

First of all, now that people would be paying for no ads, websites will overload their sites with ads, because Google will have the solution that "everyone is choosing anyway". It would make it "OK" to have tons of ads on your site, because there's a solution.

Then, your web experience becomes terrible. For a "small fee", you can keep a "nice" experience, one that used to, and always should, be free. However, if you don't give Google your money, then your web experience is going to be so filled with ads that content will take forever to load. And even if it does load, it'll take 30 minutes to read an article, because every 30 seconds you'll get the regular ad popups. Then, you'll have the sidebar ads that follow you. Or the mobile ones that get in your way as you scroll. You won't be able to view content, because ads will have taken over even more than they already have.

Now, in this terrible future, what about those who can't afford Google's "small fee"? They'll be condemned to the ad version of the entire web, one that doesn't load properly, that people have started to discard. The true "web" will be the one where you pay to view. These people won't have access. And if they aren't able to pay the small fee, then most likely they're accessing the internet over a slow connection. Maybe it's a library that can't afford the fee either. Or maybe it's at home, and they can only afford slow internet and second-hand computers. Not everyone has the money to buy a brand spanking new MacBook Pro/Air from Apple.

So now, 5 years down the road, there's two versions of the web. The one that Google controls and tracks 100% (oh yeah, we didn't even get to that yet), and the one that is so ruined with ads, that the people make a decision. A big one. Let's just get rid of the ad version of the web. You can't use it anyway, so there's no use. The only way to go- is to give your monthly payment to Google, so that you can access the web. Now, you gotta pay to view the web at all. The free web is gone. Google took it away.

There's also Google, sitting on their exponentially growing pile of money, tracking every web user. Sure, there may be other competing services that let you "into" the web, but they're also gonna track you. No doubt about that.

There's so much more that I haven't even said. How will websites determine how much a "view" is? What about requests that are half loaded? How will you know how much it costs to view a webpage? Not all web content is created equal. Definitely not.

I really hope this works out. As much as I dislike advertising...and Google...something has to happen. The "ad blocker" - "ad blocker blocker" arms race is patently stupid. There has to be a way to get money to content providers so they can opt out of the madness. Google will still be able to provide them with all the sweet sweet surveillance data that they thrive on.

We also thought about this at Steady (www.steadyhq.com). We built a system for recurring payments to independent creators, and some of our publishers allow users to pay for not being bugged by their adblocker detectors. This generates additional turnover from users who normally don't get served any ads because they use adblockers, but I believe such an offering should encompass all ads, not just Google AdSense (why would you pay just to reduce the amount of ads?), and it should include removing paywalls (e.g. at a higher price point, to monetize "superfans").

I'm not sure how much value I derive from ad-funded websites, besides maybe Google Search. There was a time most websites were free and run for the good of the community, not for profit and not as a full-time job. I could buy my high-quality content in the form of magazines and newspapers. Maybe that's the paradigm worth investigating - community generated, ad-free content on the web but paid-for, bundled (magazine-style) high quality content for sale. Delivered not through the browser but through some other, open platform (think zines, PDFs, epub/mobi).

For me, this is what an ideal web would look like. My ad-blocker would barely get a workout, and I'd happily pay for bundled (not pay-walled, bundled, downloadable) content as I did for many years with magazines.

No-one wants high quality content to disappear, but advertising and web paywalls are not the only options.

This post doesn't even mention the easiest way to use deep learning without a lot of data: download a pretrained model and fine-tune the last few layers on your small dataset. In many domains (like image classification, the task in this blog post) fine-tuning works extremely well, because the pretrained model has learned generic features in the early layers that are useful for many datasets, not just the one trained on.
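A minimal sketch of the idea, under a stand-in assumption: here a frozen random projection plays the role of the pretrained feature extractor (in practice you would load, say, ImageNet-trained CNN layers and freeze them), and only a small linear head is trained on the tiny dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(64, 16))      # stand-in "pretrained" layer, frozen

def features(x):
    """The frozen feature extractor: never updated during fine-tuning."""
    return np.tanh(x @ W_frozen)

# Tiny labeled dataset: 40 examples, 2 classes.
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)

# Train only the small logistic-regression head on top of frozen features.
w, b = np.zeros(16), 0.0
for _ in range(500):
    z = features(X) @ w + b
    p = 1 / (1 + np.exp(-z))              # sigmoid
    grad = p - y                          # gradient of log loss wrt z
    w -= 0.1 * features(X).T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = (((features(X) @ w + b) > 0) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

The point is the parameter count: only 17 numbers are fit to the 40 examples, while the (here fake, in practice pretrained) extractor contributes its capacity for free.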

I think the problem isn't that you can't solve problems with small amounts of data; it's that you can't solve 'the problem' at a small scale and then just apply that solution at large scale... and that's not what people want or expect.

People expect that if you have an industrial welder that can assemble aeroplanes (apparently), then you should easily be able to check it out by welding a few sheets of metal together, and if it welds well on a small scale, that should be representative of how well it welds entire vehicles.

...but that's not how DNN models work. Each solution is a specific selection of hyperparameters for the specific data and specific shape of that data. As we see here, it is specific even to the volume of data available.

It doesnt scale up and it doesn't scale down.

To solve a problem you just have to sort of... mess around with different solutions until you get a good one. And even then, you've got no really strong proof your solution is good; just that it's better than the other solutions you've tried.

That's the problem; it's really hard to know when DNNs are the wrong choice versus when you're just 'doing it wrong'.

What's most concerning about @simplystats' blocking activity is the chilling effect it has on discourse between differing perspectives. I've tried to come up with a rationale for why highlighting the most recent evidence in reply to someone who sympathized with Leek's original post (btw, @thomasp85 liked the tweet) is grounds for blocking, but I can't come up with a reasonable idea.

Further aside, is irq11 Rafael Irizarry?

Update: after emailing the members of @simplystats they have removed the block on my account and offered a reasonable explanation. SimplyStats is a force for good in the world (https://simplystatistics.org/courses/) and I look forward to their future contributions.

It's an interesting conversation but really weakened by failing to take on the generalization problem head on. This is something I see in a lot of discussions about deep nets on smaller data sets, whether transfer or not. The answer "it's built in" is particularly unsatisfying.

The plots shown certainly should raise the spectre of overtraining - and rather than handwaving about techniques to avoid it, it would be great to see a detailed discussion of how you convince yourself (i.e. with additional data) that you are reasonably generalizable. Deep learning techniques are no panacea here.

People keep saying "a lot" without even thinking that it's a relative term. For images, "a lot" means enough to reach x percent accuracy; for OCR of a single font, "a lot" means 26 letters plus special characters and numbers. Stop saying "a lot" blindly as if everyone understands.

The fact that there are people "getting their jimmies up" over questions of training massively parameterized statistical models on tiny amounts of data should tell you exactly where we are on the deep-learning hype cycle. For a while there, SVMs were the thing, but now the True Faithful have moved on to neural networks.

The argument this writer is making is essentially: "yes, there are lots of free parameters to train, and that means that using it with small data is a bad idea in general, but neural networks have overfitting tools now and they're flexible, so you should use them with small data anyway." This is literally the story told by the bulleted points.

Neural networks are a tool. Don't use the tool if it isn't appropriate to your work. Maybe you can find a way to hammer a nail with a blowtorch, but it's still a bad idea.

1. The original argument is a strawman. What do they mean by "data"? Is it survey results, microarrays, "Natural" images, "Natural" language text or readings from an audio sensor? No ML researcher would argue that applying complex models such as CNNs is useful for say survey data. But if the data is domain specific, such as Natural Language text, images taken in particular context, etc. using a model and parameters that exhibit good performance is a good starting point.

2. Unlike how statisticians view data (as, say, a matrix of measurements or a "data frame"), machine learning researchers view data at a higher level of representation. E.g. an image is not merely a matrix but an object that can be augmented by horizontal flipping, contrast changes, etc. In the case of text, you can render characters using different fonts, colors, etc.

3. Finally, the example used in the initial blog post, of predicting 1 vs 0 from images, is itself incorrect. Sure, a statistician would "train" a linear model to predict 1 vs 0; however, as an ML researcher I would NOT train any model at all and would just use [1], which has state-of-the-art performance on character recognition in widely varying conditions. When you have only 80 images, why risk assuming that they are sampled in an IID manner from the population? Instead, why not simply use a model that's trained on a far larger population.

Now the final argument might look suspicious but its crucial in understanding the difference between AI/ML/CV vs Statistics. In AI/ML/CV the assumption is that there are higher level problems (Character recognition, Object recognition, Scene understanding, Audio recognition) which when solved enable us to apply them in wide variety of situations where they appear. Thus when you encounter a problem like digit recognition the answer an ML researcher would give is to use a state of the art model.
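The augmentation idea in point 2 above can be sketched as follows; the flip and contrast tweaks are generic examples, not any particular library's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(4, 4)).astype(float)  # tiny grayscale "image"

def augment(image):
    """Yield simple label-preserving variants of one image."""
    yield image                                # original
    yield np.fliplr(image)                     # horizontal flip
    for factor in (0.8, 1.2):                  # contrast changes about the mean
        mean = image.mean()
        yield np.clip((image - mean) * factor + mean, 0, 255)

variants = list(augment(img))
print(len(variants))  # one image yields 4 training examples
```

Each transform encodes a domain assumption (a flipped cat is still a cat), which is exactly the higher-level view of data the comment describes; the same trick makes no sense for, say, a survey-response matrix.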

Case in point: the Silicon Valley "not hotdog" classifier, which they stopped at hotdog-or-not due to lack of training data, when in reality they could've just used a net pre-trained on ImageNet. Lol, I was literally cringing through that episode so hard xD

It seems weird to focus so much on ES2015 features, so many of which are polyfillable (not optimal, sure, but still), when there is such uneven support for DOM (and related browser) API features in Safari.

I mean, where are service workers? Give me a nice, cross-platform way to handle offline-first and I'll be happy. But hey, it isn't like there's some kind of browser monopoly on iOS or anything, like there was in windows.

The benchmark at the bottom showing how far behind Firefox has fallen on JS performance is pretty telling. It confirms my own recent observation that Firefox is at least 3-6 times slower in practice for the particular app I'm working on.

I really wish we would stop calling it ES6. Browser vendors should be especially careful to call ES2015 by its correct name and help stop perpetuating the confusion (ES6, ES7, ES2016, ES2017, you can see how this quickly gets confusing).

I've been staying away from most new ES6 features because when I tested them a year ago (either transpiled or with native support) they were slow as molasses in every browser I tried. Hopefully having some better benchmarks gets the Firefox and Chrome developers to work on this too.