I'm going through the course right now, and the instructor is saying some strange things, clearly (to me) ignoring that what he's saying is only true in very specific contexts.

For example, in the video I just watched he said "the natural way to compute the distance between two vectors is using cross entropy," and then went on to describe some unnatural features of cross entropy. The truly "natural" way to compute distances between vectors is the Euclidean distance, or at least some measure that satisfies the properties of a metric; cross entropy is neither symmetric nor zero for identical inputs, so it fails those properties.
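To make that concrete, here's a quick NumPy sketch (my own illustration, not anything from the course) of why cross entropy fails the metric axioms that Euclidean distance satisfies:

```python
import numpy as np

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i * log(q_i); assumes p and q are probability vectors."""
    return -np.sum(p * np.log(q))

def euclidean(p, q):
    return np.linalg.norm(p - q)

p = np.array([0.8, 0.1, 0.1])
q = np.array([0.4, 0.5, 0.1])

# Euclidean distance behaves like a true metric: symmetric, and zero iff p == q.
print(euclidean(p, q) == euclidean(q, p))   # True
print(euclidean(p, p))                      # 0.0

# Cross entropy violates both axioms: it is asymmetric, and H(p, p) equals
# the entropy of p, which is nonzero for any non-degenerate distribution.
print(cross_entropy(p, q))  # ~1.03
print(cross_entropy(q, p))  # ~1.47, not the same as above
print(cross_entropy(p, p))  # ~0.64 (the entropy of p), not 0
```

None of this makes cross entropy a bad training loss, of course; it just isn't a "distance" in the metric sense.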

I can understand this is a crash course and there isn't time to cover nuances, but I'd much rather the instructor say things like "one common/popular way to do X is..." rather than making blanket and misleading statements. Or else how can I trust his claims about deep learning?

Would it be beneficial for me as a developer to take these machine learning courses? I took a course in the uni a while back and know the general techniques, but I'm not sure how it would help me in my career unless I'm doing some cutting edge work in the field or focusing on a machine learning career, in which case wouldn't I need to be pursuing a postdoc or something in it?

When it comes to the course itself (I've just started it) it looks nice, but the (initial) questions tend to be vague.

E.g. in the first question with code, I had to reverse-engineer what they meant, including passing values in a format I consider non-standard (transposed!). The first open-ended questions were entirely "ahh, you meant that aspect of the question".

For people interested, Stanford has an excellent online course on deep-learning with an emphasis on convolutional networks. [1]

It comes with videos, notes, all the math, and cool IPython notebooks, and it will have you implement a deepish network from scratch. That includes doing backprop through the SVM, softmax, max-pool, conv, and ReLU layers.
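For a flavor of what "backprop through a layer" means in those notebooks, here's a minimal sketch of a ReLU layer's forward/backward pair, plus the kind of numerical gradient check the assignments use to verify a layer (my own illustration, not the course's actual starter code):

```python
import numpy as np

def relu_forward(x):
    # Forward pass: clamp negatives to zero; keep the input around for backprop
    return np.maximum(0, x), x

def relu_backward(dout, cache):
    # Backward pass: the upstream gradient flows through only where input > 0
    return dout * (cache > 0)

# Numerically check the analytic gradient -- the standard sanity test for a layer
np.random.seed(0)
x = np.random.randn(4, 5)
dout = np.random.randn(4, 5)

out, cache = relu_forward(x)
dx = relu_backward(dout, cache)

# Centered finite differences, one input element at a time
h = 1e-6
num_dx = np.zeros_like(x)
for i in range(x.size):
    xp, xm = x.copy(), x.copy()
    xp.flat[i] += h
    xm.flat[i] -= h
    num_dx.flat[i] = np.sum((relu_forward(xp)[0] - relu_forward(xm)[0]) * dout) / (2 * h)

print(np.max(np.abs(dx - num_dx)))  # should be tiny
```

The conv, pool, and softmax layers follow the same forward/backward-pair pattern, just with more bookkeeping.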

After that you should be more than capable of building a 'real' net using your favourite lib (TensorFlow, Theano, etc.).

While TensorFlow may not yet be as mature as Theano or Torch, I love their tutorials: https://www.tensorflow.org/versions/master/tutorials/. They're clean, concrete, and more general than a mere introduction to the API. (I couldn't find anything comparable for Theano or Torch before.)

In any case, I regret waiting so long to learn deep learning. (I thought I needed many years of CUDA/C++ knowledge (I have none); but in fact, all I needed was the chain rule, convolutions, etc. - things I learned a long time ago.)

How accessible is a course like this with no prior knowledge of linear algebra? I know it's listed in the pre-reqs, but with a good head for math and lots of calc, is it something that could be picked up along the way? I'm normally pretty bold about stuff like that, but I know it's a core part of deep learning / ML. If it's really necessary, if anyone has any resources for linear algebra run-throughs it would be greatly appreciated!!

I'm the founder of threadbase. Thanks everyone for your kind words. I'd love to hear any comments or suggestions for what you'd like to see next or how we can improve the user experience. We're also looking for front-end/design help, as well as help with computer vision tech. Feel free to email me chris@threadbase.com.

It's pretty well known that tees made on 1920s-era loopwheeling machines don't suffer from size changes, and age really well. But sadly these are now super expensive, offered only by niche Japanese brands that bought the machinery from American corps.

After removing the laundry, take your t-shirts and stretch them yourself, one by one, when they're still slightly wet. Grab them with two hands symmetrically, stretch horizontally, moving your hands down along the shirt. Do the same vertically, and with the sleeves.

Do not use a machine dryer, just a regular standing dryer like [1].

Put your t-shirts carefully, symmetrically on the dryer, and once dry, put them on a hanger. If you follow this, you will not have to iron them at all.

Source: I've been doing this for 4 years and haven't touched an iron since. All my t-shirts are 100% cotton (though I only buy high-grammage ones) and they all look brand new and ironed (the one exception being a particular brand whose collar looks bad unless ironed; I stopped buying that brand). YMMV of course.

Good stuff. I'm interested to see data on other cotton garments, particularly buttondown shirts.

Cotton is just a lousy fiber. On the other hand, wool is a strong and resilient fiber. It also never needs to be washed provided it isn't stained.

My wife knit a wool sweater for a close friend of mine who spent 6 months as a bosun on the tall ship Lady Washington (the Interceptor in Pirates of the Caribbean). Fresh water is scarce on a tall ship, so showers were infrequent. He came home during Christmas and I smelled the sweater, which he claims he never washed, and it smelled fresh. Surprisingly, it also kept him warm and dry on the open ocean. I later learned that Irish fishermen have been wearing wool sweaters at sea for generations.

The "manufacturing variance" chart jumped out at me as looking fairly unnatural: there's variation in width or variation in length, but very few points that mix both. Then I noticed that we're talking about just over half an inch in each direction.

I don't have a dryer, but I've still had cotton shirts (not t-shirts) shrink in the washing machine. This usually happens the first time (or first few times) they are washed at the temperature recommended on the label: 40C (104F). However, at 30C (86F) I've never encountered any shrinkage. So this purely anecdotal experience makes me believe that the temperature of water can affect some cotton garments.

This is a cool analysis, although I think every t-shirt I own is from Target (Merona and Mossimo brands) so I didn't have a single point of reference for the width and length charts. Those charts seemed like by far the most useful part of this post.

Edit: I'm curious about the downvotes. Are people appalled at my lack of taste in t-shirts? :)

I still have a problem with my big and tall shirts - I have to hang-dry them to keep them from shrinking vertically when run through the dryer (due to my body type). This backs up what I've complained about for years :)

Is there a "grain" to the fabric or something? Why not turn it 90deg and have the shirts increase in length and decrease in the chest instead? I'd prefer size to stay the same over time, but if I had to choose I think I'd rather have that.

I once worked as a consultant to help users implement some software. I moved on to the development of that software, knowing the dozens of areas that could be improved to make life easier for the users, and honestly with little effort (it's a web app, I did that before the consultancy stuff). At that point in time, that was my dream -- I wanted to help make people's lives a little bit easier, and the people I helped would be those who used our software.

After around a year or so of implementing questionable features, I tried to get approval for updates to old, well-used features (stability- and convenience-focused, really), but was shot down. Improvements wouldn't sell the software, because it worked well enough, and we needed new revenue more than we needed to retain old customers. At that point I understood that once the software is sold, the customer is too ingrained in the product to leave without financial repercussions.

A while later, we got bought out by Big Company, so that strategy apparently worked. BC doesn't give half a shit about anything we ever did, and we piled on the features release after release with little concern for anything else. I tried a couple of times after the buyout to get improvements to the existing product approved, but always got shot down.

I continue to find it odd how the company can be so profit oriented, and yet so averse to improvements. I suppose I'm just wrong or don't actually understand. Either way, it makes it very hard to care about my work these days.

1) Executives cut projects. A lot. The budgets for games are so insane that executives need to constantly trim them and shift things around. It is common to walk over to an artist's desk and inform them the art they have worked on for 2 years won't be used. I am convinced telling a wife her husband has passed is the same feeling.

2) The budgets have exploded. My last project, an iPhone game, was well over $4 million.

4) Pay is low. Since you are starting fresh on each project (see 5), your working knowledge of the system is similar to a new hire's. Promotions, salary increases, etc. don't make any financial sense (see 1) unless you are a rockstar. The new kids walking in usually burn out and quit because they don't understand what a massive shit show the industry is. EA's managers just grind people until they can't walk. Disney is a sweatshop.

5) NOTHING is reused. After your second project, you quickly realize the AI you created for fish has nothing to do with your AI for a 3D shooter. The asset pipeline you created for a soccer game doesn't translate over to a racing game. Game companies are full of dead code repos. People try to create and use repeatable platforms, but then the game designer guy will walk by and say "Hey, is that the newest Unreal engine?". In games, anything reused is quickly spotted as reused. This is why games with a good series going do really well financially. GTA is on, what, like 15?

6) Success is low. A few years into a project, someone will say: "But it's not... fun". Welp, good luck fixing that. Or plan on having it rot in some terrible online store.

7) Rockstars. Executive: "OMG you wrote the AI for GTA2 in 1998??". Welp, this guy is now your boss. AND, because games are almost always a luck play - this "Rockstar" will teach you absolutely nothing.

My takeaway:

I have talked with guys in the game industry that have been in it 20+ years and asked WTF. Basically, lifers are like high school teachers. They are abused and underpaid: but they love what they do.

> No matter what's your job, you don't have a significant contribution on the game. You're a drop in a glass of water, and as soon as you realize it, your ownership will evaporate in the sun. And without ownership, no motivation.

A good description of a lot of big corp projects. Do people working on large open source projects eventually feel the same way?

This phenomenon is not exclusive to game development. Lots of people want to work for large, prominent companies like Google or FB, dreaming of working on cool projects. But the reality usually turns out to be much less glamorous. Instead of being the guy who comes up with the next killer product or feature, you will likely end up as a small cog in a huge, well-oiled machine, optimising ads to increase some metric in the fifth decimal place.

I hope this does not happen to him, but wait until he releases a game (that might even be a really good one) and gets nowhere because 1,000 other people released a game that week. It's a tough industry now! You probably appreciate the indie side of it when you're in AAA, but I can tell you from experience you appreciate the AAA side of it when you're in an indie!

> "When your expertise is limited to, let's say, art, level design, performances or whatever, you'll eventually convince yourself that it's the most important thing in the game."

This is my experience, too. Without autonomy and ownership across a whole project it's very easy for people to get tunnel vision about what's valuable. This causes general harm to both the team and the outcome of its project.

I'm not sure how to lessen the effect other than perhaps by making projects small enough that they can be worked on by just a few people and using this phase to establish a kernel of good ideas and team cohesion.

Perhaps there might be another structure where the tools that are provided to the team are literally so good that the main project can be done by just a few people working on everything together. (Idealistic vision here.)

Congratulations on pursuing your dreams! I also work at a big game company, not even on a game but on internal technology: no player will ever directly see the result of my contribution. I still feel great about my work for reasons not important here, but I totally understand what the author says.

But the feeling of being a little cog in the machine aside, some of what is said here is about failures of management: communication problems, useless meetings, broken decision processes, lack of visibility into who is impacted by a decision, etc. It's true that big projects are more difficult to manage than small ones, but in truth bad management or bad coworker dynamics can destroy motivation in big and small companies alike. I have worked in a few startups and two indie game companies, and all were plagued by mismanagement as much as, if not more than, my other experiences at a bank and at a big cell-phone company. I may have been unlucky, but it may be a simple truth about the programmer's job: working with other people is hard, and team dynamics are very important.

> No matter what's your job, you don't have a significant contribution on the game. You're a drop in a glass of water, and as soon as you realize it, your ownership will evaporate in the sun. And without ownership, no motivation.

This is why I left my 'dream job' working on a AAA MMORPG. I came on board early as the first member of a 'NetOps' team, a senior Linux systems administrator; the team later split off and grew into a number of very large, very specialized teams. My loose definition of 'dream job' at that time was 'large scale' and 'video games'. Cool!

It took a few years for me to redefine what a 'dream job' really meant, and being a drop in a bucket was not it, so I left and moved on (slowly) to freelancing, and haven't looked back.

Late to the party and this doesn't address what the OP said directly, but the state of the video game industry actually makes me quite sad.

The last AAA game I played was Oblivion, which I couldn't finish. I haven't really played a AAA game since, and have only played two video games all the way through since (Braid, and Monument Valley).

When the OP talks about working on a project so big that no one person really "groks" the whole thing, I can relate, but I also want to say "it shows".

IMO, the current state of AAA games is shit. I think the reason they are this way has to do with what the OP is complaining about: the originating vision for the game comes from Marketing, not an artist, and no one person has a vision for the game. Maybe video games just have too many resources at their disposal.

I think I read somewhere that either Ocarina of Time or Mario 64 had double or triple the playable content of the released game and Miyamoto had a perfectionist eye for the game and was merciless in what made the cut.

Resource constraints are a good thing, IMO, as they force people to make a razor-focused product that trims the fat mercilessly.

Having unlimited resources is the enemy of good decision making, and it shows in the current state of video games (and film too). Games and movies are just too long/full these days.

I do not get why people so often use "...or, how I learned to stop worrying and..." in their blog post titles. Are they doing it as an homage to the film Dr. Strangelove (I'm not sure if that was the originator of this alternate/sub-title phrase), or are they doing it because it has become a meme among bloggers?

If the latter, fine; at worst they are unoriginal. If the former, then they either have never seen the movie or don't understand it and the absurdity of the title character, to say nothing of his "learning to love the bomb".

Or, this phrase is common and I erroneously associate its origin with the film.

In every case but the last, it irks me, but for no good reason ultimately.

I don't work in the videogame industry, but I can totally relate. I work in a small website dev studio, and we interact with a lot of companies, both large (though not huge) and small.

As soon as you get people working on a project who are too specialized, no matter the size of the team, you inevitably get conflicting concerns. I think it's very important for managers to understand what those concerns are in order to make the right decisions.

I also think that even specialized people should have some knowledge of other specializations (e.g. designers that understand programming, and vice versa). On very large projects, this is impossible as there are just too many fields, but still I value very much "general knowledge" for that reason.

The problem with companies like Ubisoft is the mass-market approach. Big publishers prefer commercial mass-market art to good art. As a result, the more interesting games come from independent studios like inXile, Obsidian, CD Projekt Red and others. Not sure how it looks from an insider's standpoint, but from a gamer's standpoint, big publishers like Ubisoft and EA are plain boring; their games are the pulp fiction of the medium, and you don't expect masterpieces from them (coincidentally, their games are also the ones most often plagued by DRM, in contrast with games from independent studios).

"On large scale projects, good communication is, simply put, just impossible. How do you get the right message to the right people? You can't communicate everything to everyone, there's just too much information. There are hundreds of decisions being taken every week. Inevitably, at some point, someone who should have been consulted before making a decision will be forgotten. This creates frustration over time."

This is an issue I've wrestled with over the years - too small a company and your resources are limited, too large and progress mires, and it mires because of communication.

A bit related is when you work at big companies like Apple and Tesla. These have a "hero" at the top. There is nothing you can do but wait for the headline that credits a feature you built to Elon Musk or to Jobs's amazing leadership. I have nothing against these two, but it is very demotivating to work under that.

I've worked for a couple of small games studios, and once for a big studio working on a AAA game. The headcount observations resonate.. I remember our teams growing, and growing, and growing, and each extra programmer detracted from the "community" feeling of being part of a studio, and added to the complexity of developing such a large code base with so many devs.

Compare that to small studios, where you can really feel like part of a family. It's very different, and all these kinds of feelings are more intense than other IT companies I've worked at. (Probably partly because of the extra time you tend to spend there when working in the games industry...)

Having said that -- some of my best friends were made when working at the big AAA studio! So it's not all bad.

This was a great read. I worked at a large web agency once and did some pretty decent work. It is definitely rewarding to see people use something that you worked hard on, and to see it on TV and in magazines, etc. But that yearning to do your own thing and blaze your own path is a feeling that I'm certain most people who work in creative fields go through.

Sidenote: before he said that the small projects were cancelled, I assumed one of them was Evolve (https://evolvegame.com/agegate/) (I don't follow games closely enough to know which studio makes which game).

I'm curious as to how he was able to, I assume, bootstrap a game company for a year before releasing an iOS game.

So let me just throw this out there, we will always have to answer to someone. Whether it's our middle manager in a big organization, VCs telling us how fast we need to grow, or our demanding users because they are the only way to get revenue.

All software written at this stage is small cogs on a much bigger platform written by teams of brilliant people over the last 30-40 years.

I do think it's fair to say you want to work on actual interesting problems and being one of 20-40 people working on a game engine is probably very tedious. I imagine long code-review cycles since any tiny change could destabilize the entire system several layers up.

Some people need big organizational structure to produce their best work, while others need the freedom of infinite WFH days, answering only to users, to produce theirs.

> The team spirit was sooo good! Our motto was "on est crinqués!", which more or less translates to "we're so hyped!". During our play sessions, we were so excited we were screaming and shouting all over the place. I think it bothered colleagues working next to us, but hell, we had so much fun. I didn't feel too guilty.

Wow. IMO a dream job is a balance between having fun like you described and working on complex problems. I love how you have written this paragraph.

We are all just cogs. What I learned is no matter what sized cog I am compared to others, just make sure my interaction with the other cogs is as smooth as possible. I take pride in doing good work no matter how small or large.

I used to work beside UBISOFT here in Montreal. I'd hear them talk about video games during lunch and it was pitiful. It seemed like having colored hair and geek chic was more important than actually knowing anything about video games.

I can well imagine this can occur in larger, non-games software development projects. I wonder if it is the same?

I sort of suspect not. I am currently refactoring an (albeit important) part of the LibreOffice codebase - the VCL font subsystem. Mostly it's reading the code (in fact, 90% is reading and understanding the code), but it's kind of satisfying looking at how changes to the code make things better and... more elegant.

Perhaps this is just an Open Source thing. Or maybe I'm unusual in that I like to focus on smaller modules and make them really good, then move on to the next thing.

I had a similar experience starting life with opportunity debt. Single-parent family whose mother had no high school education. I have ADD and dyslexia, moved around a lot with a good part of my life in subsidized housing, and never graduated from high school. No one teaches you the basics, so when you do start coming into your own and taking control of your life, you are incredibly behind your peers socially, politically, and intellectually. I eventually went to community college as a mature student, made my way to university, did a masters and then a PhD at Yale. Through it all I was always one or two steps behind, and so many opportunities were missed because I didn't have money. Similarly, now as an entrepreneur I find myself being a little more conservative, because I've been through a lot of bad times without a safety net.

When I was in University I didn't understand why some people didn't care about grades and partied so much. When we left school and got into the real world I understood why: they had rich parents with contacts that could get them good jobs or seed capital for their own businesses.

I had lots of ideas and worked in a lot of startups for more than 10 years but now the following phrase from the article describes my situation very well:

"Most of the time, potential founders who share my background tend to work at lucrative jobs in finance or tech until they can take care of everyone in their families before they even dream about taking more risks, if they ever get there."

This really resonates with me, I was the first person in my family to go to university, and my grandparents had to work multiple jobs when they migrated from Europe in order to survive. My dad did slightly better, but both my parents only had high school education and worked blue collar jobs.

It does make it really hard to change your mindset when you come from this sort of background, when you've achieved more than anyone in your family and therefore can't really talk to them about your ambitions or career objectives.

It sounds awful, but sometimes I wish I had been born into a different family, with highly educated parents I could have amazing conversations with, who would encourage me to achieve and grow even more.

I find I constantly have a mindset of "I'm not good enough" and it's paralysing. I want to interview for the top tech jobs out there, like Google or Facebook, but my brain keeps telling me I'm not good enough, it's awful.

This was a bit of a tough read for me. My reaction is sort of selfish, but it was very visceral. I read this and had to come back later to respond, though I imagine the conversation's largely over at this point.

My family basically fell apart when I was around 11. My parents divorced. I stayed with my father, siblings went with my mother. My father turned into a drunk. I spent good nights carrying him from the couch to bed, and bad nights carrying him from the lawn, sometimes without clothes. I learned to drive bringing drunks home when I was about 13.

I had no social skills. I struggled in school and failed a grade, though I eventually made it up and graduated high school on time. No one ever even mentioned college to me. I never thought about it until everyone I knew was talking about where they were going. Toward the end of high school my father's alcohol habit turned into a hard drug addiction. About a week after my 18th birthday, we were kicked out of our house because he hadn't paid rent in months. He went to go live with a fellow addict and I became homeless.

I lived on friends' couches for a while. Around that time I realized that life could continue getting worse, or I could start fighting the tide. I got a job making pizzas, then did construction work, and then started a sub-contracting construction company when I was 19. When I was 22 I had 14 people working for me. I ended up shutting the business down, mostly due to mistakes I had made. After that, I got into tech.

I'm 30 now. I've got a family and don't have much of a relationship with my parents or siblings. I make a solid salary, and have done fairly well in my career, but I struggle with pretty severe imposter syndrome. I have trouble making lasting connections, and have failed entirely to find any mentorship. My wife hardly knows anything about my history, but she knows more than any of my friends.

All of this is a long winded setup to say, I didn't get that transformational experience that the writer here experienced at university. I didn't even know SAT classes existed until well after they would have helped, and had never heard of Stanford until I was into my tech career. I would have given quite a bit to trade my father for an immigrant who simply didn't work. I very much admire the writer's drive and results, and don't mean to detract from any of that, but I have a hard time fighting the urge to point out that he had more privileges than he probably realizes.

First, let me say that I am happy where I ended up. I'm successful, enjoy my work, and when I compare my personal income with our family income when I was growing up, it is an absurd multiple.

We were a very poor family in a poor part of the South. I went to a top-10 small private university on a full ride, felt completely alienated and never quite figured out how to function in that environment. I dropped out and moved to San Francisco at what turned out to be a very good time (early 90's), and once Netscape dropped, discovered nobody else knew what they were doing with this web thing either, and more or less faked it until I made it.

At the same time, I have had, and do have, ideas that others have executed on, that I know I could have made a go at, if only...

The "if only" list is long, and most of it comes back to self-imposed limitations that I can trace back to how I grew up. Frequently it relates to economic security, but there are other habits of thought that stop me from even getting to the point of worrying about that.

One big one is that I never learned to think about entrepreneurship. A big lesson hammered into me growing up was the importance of "finding a good job", not figuring out how to make my own.

I did start a company in my mid-30s, and we did OK, until we didn't. And that failure (I think) had nothing to do with the habits of thought of a poor kid. But failing in a similar way in my 20s would have left me in a position to learn from that and try again, something I'm unlikely to make a go at 10 years later. I do little things for side income, but those are hobbies.

So it ends up being this thing that doesn't really bother me at this point, but does leave me to wonder what would have happened if I had picked parents from a very different walk of life.

And I am quietly amused when people tell me how they built everything themselves "after a seed from Dad", or "with a great connection I made through a family friend" or similar. Those are impossible blockers for a lot of people, even if they get over some of the habits of mind better than I did.

There's a strong thread of meritocracy in the tech community, but there is no such thing. When you choose the clearly better developer over the other, you're often choosing the one who had better resources growing up, not just natural ability. The poorer developer may have had a natural advantage over the other one, but didn't have the money to develop it as much. So you're really just selecting for wealth all over again.

This is what's behind the achievement gap anxiety: Wise rich people don't want to perpetuate a world where only money selects success. It's wasteful and ultimately unsustainable.

Has anyone done a survey of the family socio-economic status of startup founders and early employees? I'd be curious to see how many founders/early employees are from low-income families, whether their parents graduated college, etc. If not, I'd love to create one.

He conspicuously missed the part where time spent working a job, studying and generally acting like a responsible adult is time not spent networking.

The "poor" kids also tend to find each other at college, and over the first few semesters they form networks separate from the rich kids'. People tend to want to hang out with people who are similar to them. One group goes out partying together; the other sits in a dorm room listening to music and drinking from a $15 handle. Their friend groups don't overlap much.

The poor kids tend to build networks where the personal skills and resources members bring to the table in the present are what matter (or that was my observation). I guess when you can't throw money at a problem, knowing who's the IT guy and who's the car guy becomes more important.

I grew up on the other side of the world, amazed by what I heard in the news: that there existed a world beyond mine where smart people with smart ideas built great companies overnight. I am smart. I have merit. I dropped out at 19, taught myself how to code, and built a 6-figure business with my projects online. I want to learn more.

I got turned down by 15+ companies and startups in the past few weeks because they couldn't sponsor my work visa. This is Canada.

What was that axiom attributed to Red Auerbach? "You can't teach height." Ricky demonstrated he was hungry to learn and succeed at a very early age, a quality that will always bring some level of success through life: "I had to bring my dad to the office the next day and told him to pretend to say some words in Mandarin while I just demanded that I get put in an honors-level English class."

How do you identify those who are underprivileged, but carry that quality too? It can be very difficult to identify.

Excellent post. But I feel that we need to go beyond talking about what we can personally do to improve our situation. Either the vast majority of people are ill-adapted for success, or something else is going on. I think we should go beyond the classic argument "If we just all recycle, the world will..." Or "If we all buy electric cars, global warm....".

This post had some of that individualistic attitude toward a much broader and obviously systemic problem.

There might be many more success stories if children growing up closer to the poverty line were able to do so in more nourishing environments. However, discouragement, lack of confidence, and anxiety are not restricted to any racial or economic background. Not having a silver spoon is in many ways a better environment in which to be raised.

The OP does not say that his parents didn't show him any love, which matters more for a person's development than any economic status. Many of the other struggles can be used as fuel for building positive character traits, if one lets them.

Having read through the post, it doesn't appear that he's actually arrived at a valid point, and is just trying to brand himself as being underprivileged through the telling of his life story, which has turned out to be successful by most standards. He uses the argument that "mindset inequality" gave him a chip on his shoulder so he was able to succeed, and therefore others fail because of it, which seems contradictory.

Paul Graham is one of my biggest programming heroes. He single-handedly changed the way I think about and do programming about a decade back, and I am eternally grateful for it. One of the biggest lessons I got from him is "succinctness is power". That essay was a game changer both in terms of the math work and the programming work I do.

Here is one instance where that powerful way of thinking runs head-on into a stone wall. He said "few successful founders grew up desperately poor" and moved on. Succinct, yes, but not powerful. This piece took a couple thousand words to say the same one succinct thing that PG said, and nails it in terms of the empathy it generates and the power with which it communicates, while PG's writing on this issue comes off as aspie. This is the lesson he needs to take from that latest article and the Internet's reaction, not "Life Is Short", which totally misses the point.

Narrativity and Authenticity and Poetry and Verbosity is power! (when dealing with humans).

Made me cry. Much of it rang true. Story has similarities except about 1/5th as traumatic and am a white dude who grew up here. Have done well financially but have a compromised home situation traceable to some of the same causes.

This is not the only recent post on the theme of "oh golly gee, look at the hardship I went through to get through college and then found something."

It's a millennial post, and there have been many of them.

Going through college is a challenge... having to work or be responsible during it sucks (I interned at Borland and worked for an astronomical research company).

Post college, more than a few have to deal with life obligations that come up.

Our profession certainly offers a bit of a cushion and flexibility, but we have to manage that and our obligations.

You don't see me here whining about having to support my parents due to the last downturn, or about the many other personal decisions I've made.

The blog would have been better written as challenges met and overcome, leaving out the (for lack of a kinder word) whiny bits...

Yes, coming from poverty has challenges, and friends in that situation stretched into their late 20s to complete a degree... but perspective and awareness of the wider world is needed, not another post about personal insecurities.

Interesting take. I'd like to hear what PG and others think. Coming from a middle-class background, I can relate a bit and see, observationally, the other components of what Ricky's calling "mindset inequality". It's almost like "new money" vs. folks that had bigger dollars to spend growing up. I know a lot of friends that have deeply entrenched psychological elements they need to overcome before reaching that "next level", ingrained because of their upbringing. And, to Ricky's point, it's sometimes more of a challenge than the monetary differentials.

"but building and sustaining a company that is designed to grow fast is especially hard if you grew up desperately poor"

Most people don't have the money or resources to build a company like this, which is why we have VCs. They know you are in a desperate situation and exchange the money you need for a % of the company.

The better thing to do is choose a solid business idea that can be built slowly and at a certain point, put money you make from this venture into an idea that needs more capital to succeed.

An old friend and I were talking a few weeks ago, and I smiled when he said "We were so poor growing up we didn't even realize we were poor." And we didn't; we were so poor we couldn't even pay attention. It was good, though. It's still easy for me to live in a tiny apartment and exist on a steady diet of eggs'n'oatmeal, apples, and frozen chicken bought in bulk.

It is a good thing that privilege is becoming a topic in these circles. It's fascinating to see how many people still try to present it as looking for excuses. Perhaps because they just don't understand what it really is, or they need to validate their success by convincing everyone that it is only their hard work that matters and nothing else. Also, beware of survivorship bias: we don't exactly get to hear the stories without a happy ending here.

> Compare that level of confidence to a kid with successful parents who'd say something along the lines of "If you can believe it, you can achieve it!" Now imagine walking into a VC office having to compete with that kid. He's so convinced that he's going to change the world, and that's going to show in his pitch.

I enjoyed this article a lot but clearly this guy also made some of his own hardships. Going on ski trips just to fit in and then running out of money is incompatible with the image of a frugal poor kid.

Excellent post. We like to think that sometimes the underdog wins, but sadly, success is typically given to those who were born with it. The unfortunate part to me is the credit they are given, as if they were amazing rather than born lucky.

> We think this is the reason why poor founders tend not to be successful.

The essay by PG actually meant that there are no poor founders at all. It would be interesting to have statistics on whether poor founders fail more, or don't even get a chance to try at all. I have reasons to believe that the rare poor person is more motivated and determined than the average groomed-to-be middle class entrepreneur, and there are plenty of cases of dirt-poor persons becoming millionaires.

I have lived in China for more than 5 years, and in Boston, Japan, and Korea for more than 9 months each.

In my opinion, minimizing conflict has nothing to do with being poor, and a lot with being Chinese educated.

On the contrary, I volunteer helping poor kids like Spanish gypsies or Sub-Saharan Africans, and they (and their parents) are ultra-confident and spontaneous. Being open is the default thing for them.

I managed Chinese people in China and there was a world of difference between natives and those Chinese educated overseas.

When living in the US, I was shocked to see parents cheering their kids for the most stupid things, when in Europe as a kid you are forced to make 4x more effort without any rewards at all (like learning multiple languages). It is just what is expected of you.

In Asia, this pressure on kids is even higher than in Europe.

Family is very important for the Chinese, almost a religion. This has advantages and disadvantages. For innovation, it is a big disadvantage. Innovation means taking risks; being close to your family means having to convince lots of people those risks are worth it. Most people won't understand you, and it is very hard.

In the US, everybody is on their own; basically, nobody gives a damn, which is great for changing the world.

tl;dr: In spite of motivation, talent, and hard work, financial situation and immigration (in my case) play a big role in your entrepreneurship journey.

Excellent article by the writer. Apologies for the long post; however, I hope it is helpful for someone in a similar situation. I can relate to many things that he has faced, and I feel incredibly lucky not to have faced some things that he had to.

I grew up in a small town in a poor family in India, as the eldest of four siblings. Our monthly budget was 20 dollars and things were really tight. However, my dad worked really hard, 16 hours every day, and made sure that my studies were not hindered. He told me every single day that with hard work I could achieve anything I could dream of.

I got into IIT Bombay (one of the most prestigious colleges in India). However, it was obvious to me that I needed to get a decent-paying job right after school to support my siblings and my dad, who couldn't do 16 hours any more.

It took me the next 8 years of working for others to save enough to pay for my and my siblings' studies and marriages, and to help my dad retire.

During these 8 years, I built and ran the biggest social network to come out of India. Apart from this, I also built something which is now the Twilio of India. I was also part of the team which built the current mobile offering at LinkedIn.

If I had had financial stability, I would have started working on my own ventures 3 years into my career. Instead it took 5 more years. As soon as I had financial stability, I quit LinkedIn (with 2.5 years of stock unvested) to start a company.

I started a company where we had incredible opportunities. We built something like Slack for consumers, around the same time as Slack. However, being on an H1 visa, I was a minority stakeholder in the company, and that is a bad situation to be in if your traction is not already proven. It made sense to exit the company, so we sold it to Dropbox in an acqui-hire.

Dropbox treated me really well. I met some of the smartest people I have ever met over there, and it can be a great place to work for many people. However, I soon realized that it wasn't a good fit for me. Such companies are very top-down driven, there is little creative freedom, and most of the work is cleaning up the tangled code developed over 7-8 years. So I quit Dropbox after a year.

Now I am in a job that gives me more creative freedom, and I am pretty happy on that front. Meanwhile, I have been the sole advisor for a few companies over the past 2-3 years, and they are all profitable and didn't need to raise any money. The entrepreneur in me keeps me raring to go and start another company. However, because I am on an H1 visa, I do not want to build another company with a minority stake at formation (USCIS rules). To fix this, I would need to get a Green Card. However, if you are from India, it will take you 8-10 years to get a Green Card in EB-2.

So the next steps are either to move from the US, or to find a way to get a Green Card on EB-1. If anyone knows any good immigration lawyers, please introduce me.

However, to relate back to the original post: in spite of motivation, talent, and hard work, financial situation and immigration (in my case) play a big role in your entrepreneurship journey.

This is a great article. I'm more of a reader than a contributor through these articles. I just had to comment on a great, positive post. It makes me want to provide more positive feedback to others, to hopefully keep them going.

As someone who grew up in the exceptionally poor, rural South I'm not sure what to take away. I don't know anyone who was able to go to Stanford despite bad grades in high school. That's an enviable luxury.

Props for writing this. Oftentimes I want to tell my (different but similar) story, but never do. I don't know why. It probably has to do with a number of the points you make in the article, so you are a couple of steps ahead of me.

Forgive me for being a bit sappy here, but this post, and the discussions that it inspired here are absolute gold!

It's certainly not the first time I've thought about this topic, but for whatever reason, the OP and much of the discussion are resonating very deeply with me (and apparently with a lot of folks). IMHO, this is some of the most productive discussion about privilege and opportunity that's ever appeared on the internet; for the most part, this discussion has avoided the sort of aggravated competition (i.e. pissing contests) and judgements that generally arise out of internet discussions of privilege. In place of those nastier (albeit very human) responses, this thread is full of empathy, support, and offers of help.

I'm very proud of our little community here today.

I'm planning on writing a more detailed post in a few days after collecting my thoughts a bit more, but I'd like to share some half-formed ideas which this post has inspired (comments and criticisms are very welcome!):

1) Part of what's awesome about this discussion is that it seems to have enabled a bit of ad-hoc group therapy. I think it's very helpful for folks who are facing these hurdles to realize they are not alone; while everyone's situation is unique, it's great that people have been acknowledging similarities in their stories, rather than arguing about the differences. We should try to do more of this (with other contentious topics as well)!

2) As several people have suggested, I believe that collecting these stories could potentially help a lot of people. I'm totally down to build and host a site towards that end - would anyone be interested in sharing their stories in that sort of venue?

3) While the specific issues that people have had to deal with are different, there seem to be some common 'flavors' that many have experienced:
a) Socio-economic disparity causing an aversion to risk later in life
b) Lack of confidence in oneself, which adds an additional handicap compared to more self-confident people, likely resulting in missed opportunities (you can't win if you don't play vs. you can't lose if you don't play); impostor syndrome
c) Lack of connections, again likely resulting in missed opportunities and increased difficulty in building new things/finding a job/etc.
d) Disparity in access to knowledge that greatly improves chances of success (e.g. importance of SAT scores to college admissions; efficient resource management; interview skills)

Improving the situation in (a) seems to be what the world at large is most interested in. Unfortunately, it's a difficult, heavily politicized, and therefore divisive issue. By contrast (b), (c), and (d) seem like problems that we could really improve, at least within our own community.

For example, someone might have a harder time getting the type of (tech) job that they want due to a lack of personal connections (it can be really hard to get your foot in the door), however, it's likely that the personal connections they need are actually visiting this site every day. While we obviously can't just start providing references for total strangers, how much effort would it be to spend a few hours corresponding with someone and vetting their skills to see if you feel comfortable in recommending them? (I'll put my money where my mouth is on this one - if anyone feels like they'd be a good fit at Cloudera, let's talk! EDIT: just to be clear, I don't really have any hiring authority, but I'm happy to talk to anyone, and potentially help with a recommendation)

Likewise, it seems that (b) could be improved for a lot of people with simple communication - impostor syndrome is very common in tech, so I assume that a lot of people here have advice on the subject, or just an empathetic/sympathetic ear.

Regarding (d), this type of information is all likely available already on the internet, but perhaps it could be more usefully compiled for this particular case, minimizing the number of unknown unknowns? What about a thread (like "Who's Hiring") listing offers for mentorship ("Who Needs a Mentor?") ?

I dunno, am I just being overly optimistic here? It seems to me there's a lot of low-hanging fruit here, if some of us are willing to dedicate a bit of time to it.

This article was very real, and I can't help but identify with Ricky and with other stories I've read on here. But it's not just in SV, it's entrepreneurship in general, so I thought I'd share my story as well:

I was born in Albania, a small, poor, European country with a GDP comparable to Zimbabwe, Namibia, or Sudan. That same year marked the fall of its isolated strain of communism, and Albania's borders were opened for the first time since WW2. In the late 90s, after the collapse of its economy and Ponzi schemes, social unrest reached its height following the violent murder of peaceful protesters by the government and police. This sparked an uprising and the government was toppled. The police and national guard deserted, leaving armories open, which were then looted by militia and criminal gangs, with factions fighting in the streets to take control. My parents moved our beds to the hallway of our small apartment as there were no windows there, and my little sister and I had to stay quiet so no one would hear we were there. After a UN operation, the government was restored, and the situation was relatively calm. Sometime that following summer, my dad found out about a US green card lottery, filled out an application form, and because he was in a hurry, handed it to a random stranger waiting in line to submit it for him. He then forgot about it, until a year later, when we got a letter telling us that we had won. My parents weren't badly off in Albania; they were comfortable, their friends and families were there, they had great jobs, and the future looked promising. But having just gone through that rebellion, then the Yugoslav Wars to the north trickling across the border, and with the allure of the American Dream, they decided it would be best for my sister and me.

We moved to Philadelphia in 2000, into a working-class neighborhood, with a few suitcases and not one word of English. My parents took on multiple jobs; their Albanian communist-era degrees were obviously not recognized in the US, so my dad, once a doctor, is still working maintenance and shoveling snow on the East Coast as I write this. Like Ricky said, and like all immigrant kids, my family depended on me to learn English and deal with translation and everything in between. 5 years later, when we became citizens and received our passports, my parents knew more about American history than was taught in my inner-city high school.

My parents are incredibly supportive, but they moved to the US in their 40s; they weren't familiar with the language, the culture, and even more importantly, capitalism. Apart from the classic model of education, they weren't familiar with the tools required to be successful in a strange place like this. But with their meager wages they were happy to support my hobbies, buy me lots of books, and buy a computer with internet access, which taught me much more than my inner-city schools did.

Eventually I got a college degree, then went on to do a dual masters in design and engineering at the Royal College of Art and Imperial College in London. I even got to go to Tokyo and work for Sony while studying there. I graduated this past summer, and then launched my final group project as a startup in London with my friends: two English, Oxford-educated engineers, and a Spanish designer/engineer whose father is the president of one of the largest companies in the world.

Then reality sank in. I had to leave; I can't be an entrepreneur just yet. I moved to SV to find a high-paying job in tech for the next 5-10 years, so that I can:
a. afford to pay rent
b. pay off my educational loans
c. pay off my parents' home
d. help my sister pay for her education
e. send some money home, because my dad is getting too old to shovel snow

I've become allergic to words like "privilege", as they usually appear in the company of ill-thought-out and grandiose/insulting/wrong proclamations about How Things Should Be Done...

...but this is none of that. It's an honest look at and deep analysis of someone's experience.

And knowing how important upbringing is, and the sheer (almost superhuman) tenacity it took the author to even partially overcome the (poisonous? non-optimal?) mindset that was completely a result of things out of their control...

what the heck is everyone else supposed to do? How does society do right by people like this? Overall, we're pretty horrible at dealing with things that are as subtle as mindset.

This story really made me reflect on my own similar past. Growing up poor in the US as the son of an immigrant family and somehow getting into a nationally well-known college (a public one, though), I was shocked to see things that I had never known about.

The shock came from seeing how I lacked culture/experience/skills/confidence others had. And these others had grown up in more stable environments with either some or quite a bit of money.

I didn't know how to play any instrument. I wouldn't say everyone I knew in college played an instrument, since I wasn't at Stanford :), but still it was obvious to me I LACKED the soft skills my peers had.

I had not done many things as a teenager that are possible only when you grow up in a family with some means. And this weakened my already not-so-robust self-confidence, resulting in a mostly downward spiral on that front.

You see, growing up with money buys you a lot of soft skills that help you later.

I'm not bitter though. It is what it is. I try to be thankful for what I've had so far.

As an Indian immigrant when I see people complaining about Privilege and Inequality in SV (and in America) I feel like laughing.

I lived in a society where everyone was almost the same, similar economic status, similar privilege etc. etc. Life sucked. I decided to move out to be among the top 10% instead of one of the 100%. I eventually ended up in SV.

This place is awesome, and the very reason I am here is that I can be in the top 10%. I don't want to be equal; I seek privilege, extraordinary wealth, and stuff that most others cannot afford. I think it is an amazing thing that places like SV exist. If you somehow take out that incentive, I think I will move somewhere else. Of course, I would be moving out of California sooner or later anyway, given the taxes.

[0] - Apart from the safe suburban upper class childhood, the prep school and Harvard education my parents paid for, the job at Goldman Sachs my uncle got me straight out of school, and the finance network from that experience that eventually helped me with my first funding rounds, but yea, besides all that I'm TOTALLY SELF MADE!

I liked that you went to a community college. I too screwed up in high school. I didn't even know why people were taking another test (the SAT). That said, I cleaned up my act in my senior year, but it was too late.

Everything, and a lot more, that I missed in high school, I made up for in two semesters at community college.

If anyone in high school is reading this and thinking, "I wish I could do it over," know that you can! I had a great time at my community college. I saved a lot of money, and met some really wonderful people. The teachers really seemed to care. I didn't find that at the four-year school, or even my professional school.

Just make sure to transfer and get that four-year degree. So many people don't transfer to a four-year university, or even get the associate degree. Yes, so much of college is absolute bullshit, but degrees are still valued in a lot of professions. It's changing, though, and I couldn't be happier. British companies are taking the lead. I know that at Penguin Books, HR isn't even allowed to know whether you went to college or not. You are hired on your experience, and maybe a test? The way it should be.

You obviously have no idea what it's like to grow up poor. The fear, the guilt, the frustration, and the exhaustion that you learn almost as if through sheer osmosis from your parents.

The author is not arguing that you literally cannot compete if you're poor. But it's the very mindset from growing up in poverty that, through almost every interaction you have in childhood, leads you to _believe_ that you cannot compete, which prevents you from even trying. And even if you overcome that feeling (through constant hard work and willpower, such as our author's), say you do try to compete with the rich kids, then your lack of inborn confidence is so obviously apparent that you come off as inexperienced, or insincere. This is perfectly accurate in my own experience.

I remember a discussion on a FreeBSD mailing list, around 2003-2004, where people bragged about the impressive (though in comparison to this headline, puny) uptimes of a few years.

One of the developers remarked that while he was proud the system he worked on could deliver such uptimes, having an uptime of, say, three years, on a server, also meant that a) its hardware was kind of dated and b) it had not received kernel updates (and probably no other updates, either) for as long. (Which might be okay, if your system is well tucked away behind a good firewall, but is kind of insane if it is directly reachable from the Internet.)

In fairness, from the article it's not actually clear whether the server literally had an uptime (as reported by the OS) of 18 years, or whether it had simply been in constant service (modulo power cuts) for 18 years.

Having read and enjoyed this thread and the later follow up thread on The Register, I was struck by the number of commenters who could not clearly remember the dates/machine types or who posted anachronistic descriptions.

People here forging ahead with innovative hardware: why not just record brief details of dates and setups in the back of a diary or something? In 30 years' time, you'll be able to start threads like this!

I was sad that we had to shut it down, but we were migrating our primary colo to another city and were going to retire all of the hardware. I'd been manually backporting BIND fixes, building my own version, and had to do some config tweaks when Dan Kaminsky released his DNS vulns to the world.

It is always a sad day to retire an old server like that, but 18 years... What a winner!

Edit:

But 1158 days for an old Dell 1750 running RHEL4 isn't too bad, considering it serviced all kinds of external DNS requests for the firm. Its secondary didn't have the uptime, due to constant power issues in the backup datacenter and incompetent people managing the UPS.

I always had many Unix machines with high uptimes around. My home PC (Linux) typically reboots 2 or 3 times a year. My office DNS server currently has 411 days of uptime and is the best of my bunch ATM.

In 2002 I had installed, on the machines in my care, some program that reported uptime to some website. One of my machines, an SGI Indy workstation, had a high uptime, about 2 years. Then a new intern came, and we installed him next to the Indy. Unfortunately, his feet under the desk pulled some cables, unplugged the Indy, and broke my hopes of a record :)

Great run for all-original equipment. I worked at Shell's Westhollow Research Center in the mid-90s. We handled the nightmare of standardizing the desktop space (for the first time ever).

A lab was decommissioning an instrument controller that had been running non-stop since they first spun it up, fresh out of the packing box, a decade previous.

And they had never backed up any of the data. Sure, the solution was the pretty straightforward use of a stack of floppies. It was still pretty nerve-wracking having a bunch of high-powered research scientists watching over my shoulder, "making sure" I got all their research data off the machine they were too smart to ever back up themselves. Good times.

Anyone running old machinery that had DOS drivers would likely have older computers. I remember working on base, seeing 386/486s in an aircraft hangar area that were so covered in grime I was astounded they were still used.

Is there a way to track uptime across kexec[1] restarts? That way you could differentiate between a hard reboot and a "soft" one (e.g. an automated kernel upgrade). Having a system like that working for 18 years would be insane!
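One hedged approach (a sketch, not an existing kernel feature; the stamp path and script are my own illustrative assumptions): since /proc/uptime resets on every kernel start, kexec included, you could persist a first-start timestamp yourself and report continuous service time from that, independent of how many soft reboots happened in between:

```shell
#!/bin/sh
# Sketch: persist a first-start timestamp so "service time" survives
# kexec soft-reboots, since /proc/uptime resets on every kernel start.
# The stamp location is an arbitrary choice for illustration.
STAMP=${STAMP:-/tmp/first_boot_stamp}
[ -f "$STAMP" ] || date +%s > "$STAMP"   # record the very first start
elapsed=$(( $(date +%s) - $(cat "$STAMP") ))
echo "in service for $(( elapsed / 86400 )) days ($elapsed seconds)"
```

Note this measures continuous service rather than literal kernel uptime; to actually distinguish kexec transitions from hard reboots you would also need to log something per boot (e.g. /proc/sys/kernel/random/boot_id, which changes on kexec too), which this sketch doesn't attempt.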

Oh man, I have them so beat! I have a Slackware Linux box with similar specs: 200 MHz Pentium, 32 MB of RAM, and I think an old 10 GB Barracuda 80-pin SCSI drive connected to an Adaptec PCI SCSI card. Every so often the hard disk starts making a high-pitched noise, but it throws no errors and the noise goes away after a few minutes. It sits on a UPS, and I probably have an uptime of a few years on it now. It has been running nearly 24/7 since 1996! It has only been powered off when I needed to move the box between a home office and a few rented offices over the years.

When it was in the basement of my home/office, I would sometimes hear its disks whine as I was working out (lifting weights and such). It was even in my basement through parties in my early bootstrap years.

I originally bought it to run WinNT 4.0 for a new company a friend of mine and I bootstrapped. I would guess a couple of years later is when I put Slackware on it. It's running a 2.0 Linux kernel. It's not exposed to the public Internet.

It used to be a local Samba, DHCP, and DNS server for the company. I eventually upgraded to new hardware and kept this server around for redundant backups. I develop software, so copies of my git repositories find their way onto this box each night. It is in no way relied upon, other than calling upon it out of convenience if another server is down or being upgraded, etc...
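A nightly job like the one described could look something like this sketch (all paths and names here are my own assumptions, not details from the post); each working repo is pushed to, or freshly cloned as, a bare mirror on the backup location:

```shell
#!/bin/sh
# Hypothetical nightly mirror job (paths are illustrative assumptions):
# for every git repo under SRC_DIR, maintain a bare mirror under DEST.
SRC_DIR=${SRC_DIR:-"$HOME/projects"}   # where the working repos live
DEST=${DEST:-/tmp/git-mirrors}         # could be a path on the old box
mkdir -p "$DEST"
for gitdir in "$SRC_DIR"/*/.git; do
    [ -d "$gitdir" ] || continue       # glob didn't match: no repos found
    repo=$(dirname "$gitdir")
    name=$(basename "$repo")
    if [ -d "$DEST/$name.git" ]; then
        git -C "$repo" push --mirror "$DEST/$name.git"
    else
        git clone --mirror "$repo" "$DEST/$name.git"
    fi
done
```

Run from cron each night; `--mirror` keeps every branch and tag in sync, so the old box only ever needs to be read from when a primary server is down.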

At one point the box was in the basement of my home when a small amount of water got onto the basement floor, but because the box sat just high enough on rubber feet, there was no damage. Occasionally I go back there and pull the cobwebs off it.

There is no SSL on it. We still telnet into it or access the SMB shares for nostalgia. It's sort of a joke in the office these days to see how long it will last, or if it will simply outlast us.

Reminds me of the old NetWare servers we used to have running file services and print queues for a few computer labs at a university I worked at. NetWare was really stable, and we only restarted them when some of the hard disks in the RAID array were dying.

This is beautiful to me; its ROI is off the charts by any kind of reasonable expectation. Keeping it cool certainly helped, and having it serve a role that could even exist for 18 years is another important factor.

I'm curious what Rust says about this. Does Rust have a memory model like C11/C++11? I'm curious whether Rust (and C11/C++11 for that matter) will evolve to have primitives like what the Linux kernel currently defines and uses.

As the HN crowd seems to have quite a lot of Rust supporters, would it be a good selling point in a job description?

i.e. if (it's just currently a personal hypothesis) a company were to consider (re)writing some part of its REST-ish microservices, Rust was the chosen language, and it was looking for people to help with that, would it make for an `interesting++` in your mind? For real services used by real people, at a not-so-startup company, in Europe.

edit: I already deployed to production at my previous company some microservices in Rust (with a very strictly limited scope, and with everyone's approval), and it was quite a success. So I'm more and more thinking that Rust is now developed enough to fit the market of languages for microservices, as the work is more or less "understand HTTP, read/write from redis/postgresql/mysql/memcache, do some transformation in between", and Rust now supports these operations quite well.

Congratulations! I'm loving Rust, it's my go-to default language now. I'm going to start messing around with piston.rs to make some basic 2d games. I already wrote an irc bot with Rust.

For those wondering, "unstable" just meant that the APIs defined for interactions with some libraries were subject to change. It wasn't a problem with using the APIs; the developer just had to know that new releases might change how they worked, or whether they would even be available in the future.

I've poked at the language a bit over time and while I don't think I'll ever "get" rust, I can say the folks in #rust on irc.mozilla.org are friendly and helpful, which can't be said for all languages.

I have a question for Rust fans: how do you deal with user interaction? Do you have a favorite user interface library? Do you separate the UI from the program and communicate via IPC or HTTP+HTML? If you don't care about cross-platform capabilities, is there a great library on the platform you do care about?

After trying to understand some Rust, it seems to me that it's just as complicated as C++, from the programmer's perspective. Was I mistaken in thinking that being simpler to program in than C++ was one of the goals?

Is there a good way to look up no_std crates? For crates written with no_std, is there a keyword we should be tagging things with?

I have several crates providing access to GPIO/SPI/I2C under Linux and would like to put together a roadmap for having the interface to these types of devices that is portable to various platforms and devices (e.g. Linux as well as Zinc, ...).

I have looked at both Rust and Go. What I have felt is that Rust is too restrictive; you have to fight the compiler a lot harder than you do in Go. Sometimes that's great - for example, if you are writing device drivers or real-time embedded programs.

But for web services? I think it is overkill. I think Go strikes a nice balance. I would love to be convinced otherwise though. So please tell me, what am I missing?

This is great for pranks: you send a serious-looking email to someone, and then they forward it to someone else thinking they sent some chart or whatever, but the next recipient instead sees another picture of your choosing.

TL;DR: A series of markup and styling hacks exploiting the HTML-interpretation quirks of various web email services can be used to intentionally vary a message's appearance between services. Coupled with forwarding, which further transforms the email using service-specific quirks, you can make a game where different forwarding paths across services trigger different appearances.

Fun hack! I feel like there should be some clever practical applications but I'm drawing a blank.

One challenge many services face when sending emails is that you often want to log a user into their account if they've clicked through from an email; after all, if they have access to the email account, they can usually reset the password anyway.

But the rub is always the propensity for users to forward on those same emails. If they do, then the second recipient gets control of the first recipient's account, and that's rarely the intention of the first recipient/forwarder.

I haven't had a chance to dive in enough, but I wonder if a technique like this could effectively swap tokenized links with generic links (even if you're just swapping 'display' rules) when a message is forwarded. You might have to use different message style/markup output depending on which service you're sending the message to, but my read of this article is that it's not a ridiculous thought.

Ironically, Lotus Notes Webmail is the only client I have seen so far that actually uses iframes to display HTML emails. If webmails just could embed the HTML content into an iframe with the proper sandbox attributes... nods off and dreams

Make a link per identifiable client, show only the one for the current client, and give each link a post/get parameter identifying the client. Quite easy to do, but a lot of work to have broad client support.

Tada! I now know you read your email on your [obscure and bugged client], which is susceptible to [this and that exploit].
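To make the per-client-link idea concrete, here's a rough sketch. The selector hacks and client list below are made-up placeholders for illustration, not verified tricks for real clients: the point is only the structure of one hidden, parameterized link per client.

```javascript
// Sketch: one tracked link per known client, each hidden by default and
// revealed only by a CSS quirk specific to that client.
// NOTE: the `hack` selectors are illustrative placeholders, not real
// verified client-specific hacks.
const clients = [
  { id: 'gmail',   hack: 'u + .body .link-gmail { display: block !important; }' },
  { id: 'outlook', hack: '[class~="link-outlook"] { display: block !important; }' },
];

function buildEmailHtml(baseUrl) {
  const styles = clients.map(c => c.hack).join('\n');
  const links = clients
    .map(c => `<a class="link-${c.id}" style="display:none" ` +
              `href="${baseUrl}?client=${c.id}">Open</a>`)
    .join('\n');
  return `<style>\n${styles}\n</style>\n${links}`;
}

console.log(buildEmailHtml('https://example.com/track'));
```

Whichever link the recipient can actually see (and click) then reports the client in its query parameter, which is the "Tada!" above.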

My money is on a wifi SSID that matches the one used by thieves or a heavily-trafficked location the victims all pass through.

My company moved ~5 blocks and it really screwed up the map on my phone (which I use to get around the city) for several months. My company had left the network SSID the same in the old location, so that no one had to re-configure their wifi. Even with GPS on, my phone was always convinced it was in the old location up the block, and this would persist even when I was out on the street, until I walked around a bit.

There are companies (presumably Skyhook is one of them) who drive around mapping SSIDs to physical locations. The problem is that SSIDs can move or be duplicated elsewhere.

The article says of one member of the couple: "at one point he reset their router, and changed the frequency at which it broadcasts; it didn't solve the problem." It does not say if he changed the SSID.

Theoretically, location is often determined using not just one but several nearby SSIDs, a sort of triangulation. Another possibility here is that there are multiple nearby SSIDs around this home that match the SSIDs surrounding some other area tied to the victims.

I had a similar experience about a month ago, when I thought I lost my iPhone 5s.

Logged onto "Find my iPhone" app and it told me it was about 1km away from my house. I thought I must've dropped it somewhere nearby.

So I got the address from Apple Maps, drove there, and knocked on the door to greet a rather defensive (obviously) lady who, of course, denied ever picking up an iPhone that day.

I snooped around to see if there were any suspicious people around, maybe she has a wayward son who goes around and steals other people's phones.

I then went to the nearby police office and asked them what I could do. They told me they can't use the GPS tracking as evidence for a search warrant - doh!

It was frustrating because the app was telling me that my phone was right there! At the back of this lady's house!

At this moment, I was going through all sorts of thoughts, such as "should I break into her house at night?" and "should I go back, barge into her house, locate my phone, and shout 'AH HA! I KNEW IT! YOU THIEF!'"

Feeling dejected, I came home, only to find my phone sitting on the top of my drawer.

> In June, the police came looking for a teenage girl whose parents reported her missing. The police made Lee and Saba sit outside for more than an hour while the police decided whether they should get a warrant to search the house for the girl's phone, and presumably, the girl. When Saba asked if he could go back inside to use the bathroom, the police wouldn't let him. "Your house is a crime scene and you two are persons of interest," the officer said, according to Saba.

The police shouldn't be able to detain someone for over an hour without probable cause and without arresting them.

Now this is interesting. Presumably the coordinates of their house are significant in some way. The result of some kind of truncation perhaps? I can't see how a floating-point error could converge on a specific value like this, but I'm no expert in such things. If they could only post their address the answer would surely be found very quickly, but that would defeat the object somewhat.

If I were them, I would try a non-technical (or at least mitigation) strategy: put a sign in their yard or on their front door that says "Sorry, we don't have your iPhone" and a description of the problem and screengrabs/URLs of articles like this.

A few possibilities:

- The wireless network's name is in a database and the first match wins; that wouldn't be fixable by changing the router's name or IP address, as it's already recorded somewhere.
- The same goes for the other routers or IP addresses in the neighborhood.
- Maybe it's not their fault but someone else did it on purpose, e.g. taking a stolen phone and manipulating it inside a room somewhere else, wrapped in a metallic enclosure to block signals, so only a forged router with stolen/faked data can be seen.
- Even crazier: put a router really close by and run a VPN/proxy through it?

You know, it's not completely outside the realm of the possible that these people are lying and really know much more than they are telling the police.

I am not implying anything about these people, but I am just saying it isn't impossible.

I lived in Las Vegas for a couple of years and was involved with some people who, from the outside, seemed like very normal folk...in fact, in many ways, I was someone like that, too, due to issues I was fighting at the time.

We all have a different set of experiences in our lives, and, unfortunately for me I suppose, my experiences make me think about this in a different way than many here might.

The problem is likely that the location where these phones really are is near the WiFi router that used to be at this address. No amount of messing with the couple's current router will fix this, since those phones aren't there to begin with. The couple might have better luck hitting up the previous tenants/owners.

Based on this line from the article, I'm almost certain that this is the real answer:

> It started the first month that Christina Lee and Michael Saba started living together.

If this community can't pinpoint the problem together, then there is something up here. Clearly this warrants Google, Apple, the telcos, and someone from an electronic forensics team each putting a part-time expert into a team to figure this out for everyone's benefit - themselves and this couple.

Of course the reality is that key Google and Apple staff know exactly what has caused this and don't have a ready solution so are keeping quiet.

In any case, if there are Google or Apple employees reading, perhaps you can suggest this idea to someone internally in the chance there may be some progress before someone innocent gets killed for 'stealing' a phone.

There is an obvious and simple solution to this problem, although it's not up to this couple to implement it, but to all the developers building or using geolocation.

This is a problem caused by incorrect data representation. Everything the companies know about the location of the phones is an imprecise area, yet they are representing it with this: [1]

This absurdly precise representation doesn't convey the error margins of the information available, and it's what's convincing people that this couple's home is the point that they're looking for. The mapping companies are misleading their users by hiding the level of confidence about the information provided.

Please, all front-end developers: don't use a map pin to represent a place on the map if you don't know its coordinates or exact address. A circle with a radius proportional to the area of uncertainty is the best representation in that case.
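A sketch of what that could look like in front-end code. The 50 m pin threshold is my own arbitrary assumption, and the marker object is a generic stand-in rather than any particular mapping library's API:

```javascript
// Sketch: choose how to render a location fix based on its reported
// accuracy. Below a (hypothetical) threshold a pin is arguably fine;
// beyond it, draw a circle whose radius reflects the uncertainty.
const PIN_THRESHOLD_METERS = 50; // assumption, tune per application

function locationMarker(lat, lon, accuracyMeters) {
  if (accuracyMeters <= PIN_THRESHOLD_METERS) {
    return { type: 'pin', lat, lon };
  }
  // With kilometers of uncertainty, a house-precise pin is a lie;
  // the circle makes the error margin visible to the user.
  return { type: 'circle', lat, lon, radiusMeters: accuracyMeters };
}

console.log(locationMarker(33.7, -84.4, 1200));
```

The phone-finding services already have the accuracy figure internally; the fix is purely in how it's presented.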

What's shocking is the lack of a clear explanation by the so-called experts contacted in this article.

My explanation as to what is likely happening: "Every WiFi router has a special unique number - think of it as a serial number - baked into the device by the manufacturer (known as a MAC address). Manufacturers request a range of these unique numbers from the IEEE, and are never meant to duplicate them. When you connect to a WiFi router, you connect to its friendly name (SSID), but your phone also receives part of this special unique number (the BSSID) [1].

Companies like Apple, Google, and SkyHook, record the location of WiFi routers using this unique number. When a phone or other device has a strong GPS location and a strong WiFi signal, they can fairly reliably assume that this unique number is at this specific location.

However, not all manufacturers strictly follow the unique number allocation rule, as getting allocations can be a time consuming process. 999 times out of 1000, reuse of these numbers is not a big issue, and goes undetected. In this case, it is likely that the thieves are using, or are located near, a WiFi router with the same unique number as this couple. Changing this special unique number is sometimes possible on expensive enterprise grade WiFi routers by knowledgeable experts, but not possible or advisable on home routers. The couple should change their WiFi router."

Yes, I have conflated a number of terms there for simplicity. For technical accuracy: WiFi router -> access point.
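A toy illustration of the collision described above (all data fabricated): the location database is keyed only by the BSSID, so any router reusing that number resolves to whatever location was recorded for it, no matter where it physically is.

```javascript
// Sketch of why a duplicated BSSID poisons WiFi geolocation: the
// provider's database maps BSSID -> recorded location, with no way to
// tell two physically distant routers sharing one BSSID apart.
// Coordinates and BSSIDs below are fabricated for illustration.
const bssidDb = new Map([
  ['AA:BB:CC:00:11:22', { lat: 33.749, lon: -84.388 }], // the couple's house
]);

function locateByBssid(bssid) {
  // A thief's router reusing the same BSSID gets this same answer,
  // so the stolen phone "appears" at the couple's house.
  return bssidDb.get(bssid) || null;
}

console.log(locateByBssid('AA:BB:CC:00:11:22'));
```

This is why changing the SSID (the friendly name) wouldn't necessarily help: the lookup key is the BSSID, not the name.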

Here is a way to reproduce this issue and explain my point: if you were a thief, you could set up a GPS spoofer pointing to that house, or have had your router in that house in the past, so that some phones registered/verified its MAC address as being at the house's location. Now assume the thieves live in a location where they took this router with them and where there is no GPS signal, other router, or cell signal - only the thieves' router turned on. As soon as the thieves connect the stolen phones to their router, the phones will report being at the house.

My bet is that this is likely an intentional attack by the thieves and that they are aware of what they are doing. There is a small chance they are people who lived in the house before, or who drove by to set up their spoof, as that would have been much easier than getting their hands on a GPS spoofer.

This is just a hunch, but does anyone know whether there is any connection / shared services between the phone finding system and the iMessage airline flight tracking system?

It may just all be coincidence, but that flight tracking feature is so wonky and jacked, giving false locations, legs, flights, and information on the regular. I am surprised it hasn't caused a massive outcry for just how horrible it is. It kind of makes me wonder whether there is some shared service or database or something because the flight lookup feature just smells of the same kind of failure.

I realize, most people don't know/recall that iMessage will auto-link flight numbers. Just message the full flight number.

Given that the people showing up at their door got their info from somewhere, there might be a chance they could succeed by asking those visitors where exactly they got the data from, and then trying to contact or file a complaint with that specific service/company/...

I didn't finish the entire article, but my immediate thought was that this has something to do with these new phone drop spots that I've seen at grocery stores.

You apparently can just put a phone in one of these ATM-like machines and get money out, which immediately struck me as a clever way to buy stolen phones on the cheap from criminals, with indemnity... which would definitely lead to situations like this when those stolen phones are resold to unsuspecting consumers.

Based on my experience with a few phones I own, there's a few things that could be happening here:

The SSID/MAC address problem was already mentioned. It's possible that they have a home router with its default SSID and are encountering a MAC address collision (assuming the MAC address is always taken into account, which I'm not sure it is). Their router is likely part of some database that location services use when the phones enter an area with WiFi but no cellular service or line of sight to the satellites. I had a similar failure every time I went indoors at an archery facility I visited weekly for three months. Both my wife's phone and mine would think we were a clear 30 miles away in another city the second we got far enough into the building to lose cellular service. I dug into it and discovered it was using WiFi APs to get location. I think the archery place has another location in that other spot, so it's possible they swapped WiFi gear at some point, but it's anyone's guess.

Another possibility, hinted at in the article, is that there's no other location data available to the stolen phone (no mapped WiFi, no cellular service) but it has an IP address, so the devices are falling back to GeoIP, which is extremely inaccurate (my IP address changed recently and I am now a Canadian according to location services on my PCs with no GPS capabilities -- 200 miles off). It could be a circumstance of "that IP isn't known, but that block is owned by X ISP and here's a general location of where that is" ... only the little dot happens to land on their house.

It would be really smart for apps that track location for theft purposes to keep a reasonable history. If it's a mobile phone, the last known high-accuracy reading from the GPS should be presented along with lower accuracy results to help in situations like this. I'd imagine it wouldn't be terribly difficult to correlate several readings over a period of time and discard ones that are clearly not sane (as would have been the case with my phone in the archery place). A bonus would be to perform other actions when the device is marked "stolen", like take photos at certain intervals and upload them to the cloud to make it easier to "prove" your phone is in the hands of someone it shouldn't be (one of the tools I had did something like this).
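A sketch of the history-sanity idea suggested above. The haversine distance is the standard great-circle formula; the ~250 km/h speed cap is an assumption I picked for illustration:

```javascript
// Sketch: discard location fixes that imply impossible travel speed
// relative to the last accepted fix.
const MAX_SPEED_MPS = 70; // ~250 km/h; assumed sane ceiling for a phone

// Great-circle distance between two {lat, lon} points, in meters.
function haversineMeters(a, b) {
  const R = 6371000, rad = d => d * Math.PI / 180;
  const dLat = rad(b.lat - a.lat), dLon = rad(b.lon - a.lon);
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// readings: [{lat, lon, t}] with t in seconds, sorted by time.
function filterSane(readings) {
  const kept = [];
  for (const r of readings) {
    const prev = kept[kept.length - 1];
    if (!prev) { kept.push(r); continue; }
    const dt = Math.max(r.t - prev.t, 1);
    if (haversineMeters(prev, r) / dt <= MAX_SPEED_MPS) kept.push(r);
    // otherwise: drop the fix (e.g. a 30-mile jump on entering a building)
  }
  return kept;
}
```

Note this only rejects obviously impossible jumps; it wouldn't catch the BSSID-collision case, where the wrong location is reported consistently.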

I had something similar happen when I was in college: I was visiting a friend and the police showed up asking who dialed 911, whether she was alone, etc. This was before SSID-based geolocation became popular. I had to spend some time explaining how inaccurate cell tower positioning is; most people just assume that if the cops say it came from inside the house, it must have.

Ignoring everything else: if it's the phone that went missing, shouldn't the location at least be accurate to the nearest base station? In the case of a missing child, the operator would provide triangulation results, right?

Anyway, we use as much ES6 as Node 4 allows at work. Transpiling on the server never made much sense to me. I also used to sprinkle the fat-arrow syntax everywhere just because it looked nicer than anonymous functions, until I realized it prevented V8 from optimizing, so I went back to function until that's sorted out. (I don't like writing code that refers to `this`, so I rarely need binding; while the => syntax is concise, I rarely use it as a Function.bind replacement.) Pretty much went through the same experience with template strings. Generator functions are great.

I'm not a fan of the class keyword either, but to each their own. I think it obscures understanding of modules and prototypes just so that ex-Class-based OOP programmers can feel comfortable in JS, and I fear the quagmire of excessive inheritance and class extension that will follow with their code.
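For anyone unfamiliar with the bind-replacement point above, here's a minimal illustration: an arrow captures the enclosing `this` lexically, which is exactly what `Function.prototype.bind` is otherwise needed for.

```javascript
// Arrow functions as a Function.prototype.bind replacement.
function Counter() {
  this.count = 0;
  // ES5 style: an ordinary function gets its own `this`, so a
  // detached callback needs an explicit bind to the instance.
  this.incES5 = function () { this.count++; }.bind(this);
  // ES6 style: an arrow has no `this` of its own; it captures the
  // constructor's `this` (the instance) lexically.
  this.incES6 = () => { this.count++; };
}

const c = new Counter();
const f = c.incES5, g = c.incES6;
f(); g(); // both still mutate `c` even when detached from it
console.log(c.count); // 2
```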

One minor quibble: I was bothered by the misuse of the words "lexical" and "interpolate". The lexical value of the keyword "this" is the string "this". And you might translate between two technologies such as CommonJS and ES6, but "interpolating" between them implies filling in missing data by averaging known values. Granted, this word is commonly abused. Sorry to be a bit pedantic, but these corrections would improve the document, IMO.

The only thing from this list of new ES6 idioms that doesn't sit comfortably with me is the short-hand for creating classes. I remember being kind of blown away way back in the day with the prototypical/functional nature of Javascript and how you could wrangle something into being that behaved in an object-oriented manner just like other languages that had explicit class declaration and object instantiation.

Part of me feels that obscuring Javascript's roots in this respect is very un-Javascript-y. What think ye?

Coming from Ruby, loving template literals, feel right at home with them, I wish even C could have them (if that makes any sense!).
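For anyone who hasn't used them, template literals give Ruby-style interpolation plus multi-line strings:

```javascript
// Template literals: `${}` interpolation and literal newlines,
// the features the Ruby comparison above refers to.
const user = 'ada', count = 3;
const msg = `Hi ${user}, you have ${count} new messages.
(Note: this string spans two lines without any concatenation.)`;
console.log(msg);
```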

"Require" is the reason why we now have a module for just about anything in Node.js. I even think Kevin Dangoor, or whoever invented it, should get the Nobel prize. But then the ES committee chose to use paradigms from the year 1970. I cry every time someone uses import instead of require in JS, because they miss out on why globals are bad, modularity is good, and the JS API philosophy (super simple objects with (prototype) methods).

I'm all in on ES6 when it's practical or allowed. Arrow functions are wonderful, I love destructuring assignment, const and let, and considering that some projects I work on involve a lot of async stuff, I'm close to just giving in and using ES7's async/await functionality.

But most of the time this is in the context of Node.js development, and in every case I use Babel.js to turn the end result into ES5 code.

I'm perfectly comfortable with using ES5, because as a freelancer/contractor I often have to do so. But I really miss the ES6 stuff and the more I use it, the more time it takes me to 'switch' to a mindset where I'm only allowed to use ES5 functionality.

Nonetheless, it strikes me as really odd to actively prefer ES5. Having worked with Ruby and Python (among others), ES5 feels limiting for no good reason. The only rationale I can think of for preferring ES5 is nostalgia.

Could you elaborate why you don't like the 'perl/python' style changes? Because I truly do not understand why one would choose to limit oneself to things like .bind(this) instead of the different forms of arrow functions that make functional-like programming so much easier. And I've found that the best part of JS is that it's decently functional.

Edit: I would agree when it comes to the new 'class' keyword though. I'm not a fan of that.
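A quick self-contained tour of the features mentioned above (const/let, destructuring assignment, arrow functions with rest parameters), for anyone still on the fence:

```javascript
// Object and array destructuring pull values out by shape.
const { x, y } = { x: 1, y: 2, z: 3 }; // x = 1, y = 2
let [first, , third] = [10, 20, 30];   // skip the middle element

// Rest parameters + arrow function: variadic sum in one line.
const sum = (...ns) => ns.reduce((a, b) => a + b, 0);

console.log(x + y, first + third, sum(1, 2, 3)); // 3 40 6
```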

This will likely get downvoted - but I have just realized how much I was underestimating the privilege of developing apps in Dart instead of JavaScript. Dart had none of the mentioned idiosyncrasies from day one, all the features, and has a lot of other stuff (like async/await, yield, mixins, etc) to offer. Its tooling is very simple and powerful, and the overall experience is really nice - when there is a problem, it's always in the logic of my code, and not things like some weird implicit conversions that are so common in JS land. I almost forgot how terrible JS is...

Just a few days ago I spent $200 to purchase a multi-domain wildcard certificate so that I could host multiple secure domains, with multiple subdomains, on a single elastic beanstalk app. It was such a headache to figure out that I needed the multi-domain wildcard cert, then to find one to purchase for a reasonable price.

Now, 5 days later, AWS lets me create one for free in 3 minutes, with zero hassles. I cannot select it in beanstalk yet, but I am sure that will come. I am consistently amazed by how frequently AWS satisfies needs I barely knew (or didn't know) I had.

Can anyone think of any advantages LetsEncrypt would provide over this offering from AWS? Or does this basically kill LetsEncrypt's usage on AWS?

The only thing I can think of is that AWS Certificate Manager only validates via email addresses, which can be problematic if you don't have MX records or don't have control over them. (Maybe a large organization where the people who do control those email addresses won't click simple verification links.)

It seems a bit inconsistent as to when it will use the email on the whois record for validation, too. For some subdomains I try, it will allow validation using the whois address; other times it's just the common aliases@sub.domain.com (which requires an MX record). So I guess if you're nesting deeper than one subdomain (e.g. abc.def.example.com), then maybe it'd be easier to get Let's Encrypt set up than to try to get MX records for abc.def.example.com.

Shameless Plug/Disclaimer: I had been working on a tool to make it dead simple to use Let's Encrypt certificates for CloudFront/ELBs, with auto-renewal handled via Lambda. I'm not sure there is any use for it now that this exists, though.

Wow that was super easy. I tried this on one of my sites and it really took me like 2 minutes total to add SSL to it.

The only confusing part was that port 443 was blocked in the ELB by default (which made it look like it didn't work, but it got fixed easily as soon as I figured it out). I've never seen an easier way to do this to date.

Buying an SSL cert through Bluehost (my domain registrar and blog host) and figuring out how to apply it to my web app, zejoop.com, hosted on AWS, was far and away the most annoying and difficult chore in my development/deployment process as a relatively junior SW developer. If I could solve it all in-house within AWS (at reasonable cost) I'd be very happy. My cert just renewed, so until I roll the change to AWS my https:// is down. If the update is as difficult for me as the original install was, then I guess it will be about 18 hours of aggravation. So I'll look into this, if the OP title is a reality.

How will Amazon's new root certificate be spread to all browsers and mobile devices, so it's made sure that it will be trusted on every possible endpoint? Is the root certificate cross signed with another, already trusted cert?

There is a third way, which is to put the company into hibernation. I was faced with this with my startup a bit more than 10 years ago now. I ran out of runway, so I laid everyone off, paid the bills, and got a job. I then bought out everyone else, worked part time on the business, and built it back up over the next few years to the point where I could return full time. I could have started a new business, but I believed that there was a lot of value in the old business [1], which proved to be correct.

1. Some caveats here. Firstly, I did not have many people to buy out and they were willing to sell at a reasonable price. Secondly, my business is in biotech/bioinformatics and we had put a lot of resources into R&D. This R&D had real value that could be used to bring the business back to life.

I think this article needs to be paired with an article about "When to shut your company down." If I recall, that article exists and it basically says: when you lose hope.

Maybe I am not looking at this right but this part doesn't make sense to me:

> In many cases, <2 months is the point of no return. If you are in this state it is immediately necessary to lay off your employees and give them severance, pay down your obligations, and use your remaining cash for shutdown costs.

So is that for companies that had a year+ of runway at some point and are now down to 2 months? What about companies that never had 1 year of runway? The differences between those are pretty big.

For example if you have a 4 person startup and 2 months of runway after being on the market for only 4-6 months, you are supposed to just shut it down?

No, you take consulting jobs and do side work till you can get higher revenue or some financing.

I think, like most startup articles, this applies to companies who have already gotten past seed stage, initial traction and thus is not applicable for 90% of us.

My friends, we are finally hitting the new economy where even startups are being asked to make money -- maybe not to the point of profitability, but even a little revenue can make a big difference in a lean organization.

I've never understood the psychology of those that do not fundamentally get this. If you just finished raising a seed or angel round, chances are you had less than 12 months of runway to begin with. Perhaps your personal savings was drying up or you were running out of friends and family resources that could help you run this out further. The sense of urgency and anxiety you felt while raising your seed round doesn't go away simply because you were able to raise some money. If anything, it would increase. So the fact that someone had to specifically cover this in a blog post seems really counterintuitive to me.

Except, sometimes it doesn't? If you look at the notes[0] at the bottom of The Fatal Pinch:

>There are a handful of companies that can't reasonably expect to make money for the first year or two, because what they're building takes so long. For these companies substitute "progress" for "revenue growth." You're not one of these companies unless your initial investors agreed in advance that you were. And frankly even these companies wish they weren't, because the illiquidity of "progress" puts them at the mercy of investors.

What do you do if you're one of those companies? There's plenty of business models that could be attractive acquisition targets (read: billions), but otherwise can't monetize to save their souls.

Two pieces of advice often encountered (paraphrasing):

"Treat each funding round as if it's your last."

"VC money is like rocket fuel. It's intended to be burned at a high rate."

I don't wish to be overly mean or uncharitable, but I don't really think anyone who is unable to figure out the advice offered in the section titled "Some tips on reducing burn" all by themselves is ever going to be able to run a successful business.

It's discouraging when new employees expect a certain lifestyle on joining your startup but your runway is less than a year. Startups have been portrayed as having so many perks that there's an impossibly high standard to strive toward.

Bryan may certainly be right (I neither know him nor much about unikernels), but some parts of his argument seem incredibly weak.

> The primary reason to implement functionality in the operating system kernel is for performance...

OK, this seems like a promising start. Proponents say that unikernels offer better performance, and presumably he's going to demonstrate that in practice they have not yet managed to do so, and offer evidence that indicates they never will.

> But it's not worth dwelling on performance too much; let's just say that the performance arguments to be made in favor of unikernels have some well-grounded counter-arguments and move on.

"Let's just say"? You start by saying that the "primary reason" for unikernels is performance, and finish the same paragraph with "it's not worth dwelling on performance"? And this is because there are "well-grounded counter-arguments" that they cannot perform well?

No, either they are faster, or they are not. If someone has benchmarks showing they are faster, then I don't care about your counter-argument, because it must be wrong. If you believe there are no benchmarks showing unikernels to be faster, then make a falsifiable claim rather than claiming we should "move on".

Are they faster? I don't know, but there are papers out there with titles like "A Performance Evaluation of Unikernels" with conclusions like "OSv significantly exceeded the performance of Linux in every category" and "[Mirage OS's] DNS server was significantly higher than both Linux and OSv". http://media.taricorp.net/performance-evaluation-unikernels....

I would find the argument against unikernels to be more convincing if it addressed the benchmarks that do exist (even if they are flawed) rather than claiming that there is no need for benchmarks because theory precludes positive results.

Edit: I don't mean to be too harsh here. I'm bothered by the style of argument, but the article can still be valuable, even if just as expert opinion. Writing is hard, finding flaws is easy, and having an article to focus the discussion is better than not having an article at all.

Bryan Cantrill seems to have some personal interest in denigrating OS research (defined as virtually everything post-Unix) as all being part of a misguided "anti-Unix Dark Ages of Operating Systems". He has expressed this sentiment multiple times before, and places a great deal of faith in Unix being a timeless edifice that needs only renovation. Naturally, he regards DTrace and MDB as the pinnacles of OS design in the past 20 years and never stops yapping on about them, this article being no exception. It's his thought-terminating cliche.

He voiced all this here [1], and so I countered by listing stuck paradigms in traditional monolithic Unixes, as well as reopening my inquiry on Sun's Spring research system, which he seems to scoff at, but over which I am impressed by the academic research it yielded. He has yet to respond to my challenge.

Big upvotes for this article. I'm glad it was written, because I've seen nothing but hype for Unikernels on Hacker News (and in ACM, etc.) for the last 2 years. It's great to see the other side of the story.

The biggest problem with Unikernels like Mirage is the single language constraint (mentioned in the article). I actually love OCaml, but it's only suitable for very specific things... e.g. I need to run linear algebra in production. I'm not going to rewrite everything in OCaml. That's a nonstarter.

And I entirely agree with the point that unikernel simplicity is mostly a result of their immaturity. A kernel like seL4 is also simple because, like unikernels, it doesn't have that many features.

If you want secure foundations, something like seL4 might be better to start from than Unikernels. We should be looking at the fundamental architectural characteristics, which I think this post does a great job on.

It seems to me that unikernels are fundamentally MORE complex than containers with the Linux kernel. Because you can't run Xen by itself -- you run Xen along with Linux for its drivers.

The only thing I disagree with in the article is debugging vs. restarting. In the old model, where you have a sys admin per box, yes you might want to log in and manually tweak things. In big distributed systems, code should be designed to be restarted (i.e. prefer statelessness). That is your first line of defense, and a very effective one.

Well that is pretty provocative :-) Bryan might be surprised to learn that for the first 15 years of its existence, NetApp filers were unikernels in production. And they outperformed NFS servers hosted on OSes quite handily throughout that entire time :-).

The trick though is they did only one thing (network attached storage) and they did it very well. That same technique works well for a variety of network protocols (DNS, SMTP, etc.). But you can do that badly too. We had an orientation session at NetApp for new employees which helped them understand the difference between a computer and an appliance; the latter had a computer inside of it but wasn't programmable.

I'm pretty sure you debug an Erlang-on-Xen node in the same way you debug a regular Erlang node. You use the (excellent) Erlang tooling to connect to it, and interrogate it/trace it/profile it/observe it/etc. The Erlang runtime is an OS, in every sense of the word; running Erlang on Linux is truly just redundant, since you've already got all the OS you need. That's what justifies making an Erlang app a unikernel.

But that's an argument coming from the perspective of someone tasked with maintaining persistent long-running instances. When you're in that sort of situation, you need the sort of things an OS provides. And that's actually rather rare.

The true "good fit" use-case of Unikernels is in immutable infrastructure. You don't debug a unikernel, mostly; you just kill and replace it (you "let it crash", in Erlang terms.) Unikernels are a formalization of the (already prevalent) use-case where you launch some ephemeral VMs or containers as a static, mostly-internally-stateless "release slug" of your application tier, and then roll out an upgrade by starting up new "slugs" and terminating old ones. You can't really "debug" those (except via instrumentation compiled into your app, ala NewRelic.) They're black boxes. A unikernel just statically links the whole black box together.

Keep in mind, "debugging" is two things: development-time debugging and production-time debugging. It's only the latter that unikernels are fundamentally bad at. For dev-time debugging, both MirageOS and Erlang-on-Xen come with ways to compile your app as an OS process rather than as a VM image. When you are trying to integration-test your app, you integration-test the process version of it. When you're trying to smoke-test your app, you can still use the process version, or you can launch (an instrumented copy of) the VM image. Either way, it's no harder than dev-time debugging of a regular non-unikernel app.

It may well be the case that unikernels as currently envisioned by unikernel proponents are impossible to make fit for production; it may also well be the case that there exists a product that is closer to a unikernel than current kernels, that is quite production-suitable, and unikernels are fruitful research to that point.

For instance, you could imagine a unikernel that did support fork() and preemptive multitasking, but took advantage of the fact that every process trusts every other one (no privilege boundaries) to avoid the overhead of a context switch. Scheduling one process over another would be no more expensive than jumping from one green (userspace) thread to another on regular OSes, which would be a huge change compared to current OSes, but isn't quite a unikernel, at least under the provided definition.
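The green-thread comparison can be made concrete with a toy sketch. The following is a minimal Python cooperative scheduler (nothing to do with any real unikernel's API; names and structure are entirely illustrative), where "switching processes" is just resuming another generator, with no kernel crossing and no register or page-table swap:

```python
# Sketch: cooperative scheduling among mutually trusting tasks, where a
# "context switch" is just resuming another generator -- no kernel
# crossing, no privilege boundary, no page-table swap.
from collections import deque

def scheduler(tasks):
    """Round-robin over generator-based tasks until all finish."""
    ready = deque(tasks)
    order = []                      # record of who ran, for illustration
    while ready:
        task = ready.popleft()
        try:
            label = next(task)      # run until the task yields (cooperates)
            order.append(label)
            ready.append(task)      # still runnable: back of the queue
        except StopIteration:
            pass                    # task finished
    return order

def worker(name, steps):
    for i in range(steps):
        yield f"{name}{i}"          # yielding is the whole "context switch"

print(scheduler([worker("a", 2), worker("b", 3)]))
# round-robin interleaving: ['a0', 'b0', 'a1', 'b1', 'b2']
```

A real unikernel scheduler would juggle stacks rather than generators, but the cost model is the same: a switch is a function-call-sized operation because every task trusts every other.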

Along similar lines, I could imagine a lightweight strace that has basically the overhead of something like LD_PRELOAD (i.e., much lower overhead than traditional strace, which has to stop the process, schedule the tracer, and copy memory from the tracee to the tracer, all of which is slow if you care about process isolation). And as soon as you add lightweight processes, you get tcpdump and netstat and all that other fun stuff.
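The LD_PRELOAD idea is that tracer and tracee share one address space, so observing a call costs only a wrapper invocation. A rough in-process analogue in Python (a toy, not a real tracing tool) wraps a callable so every call is recorded without any second process or cross-process memory copies:

```python
# Sketch of in-process interposition, the same trick LD_PRELOAD plays at
# the linker level: wrap a callable so every call is logged, with only
# function-call overhead -- no tracer process, no stopping the tracee.
import functools

def traced(fn, log):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        log.append((fn.__name__, args, result))  # tracer shares our memory
        return result
    return wrapper

calls = []
length = traced(len, calls)   # "interpose" on len; in C you'd override a symbol

length("unikernel")
length([1, 2, 3])
print(calls)
# [('len', ('unikernel',), 9), ('len', ([1, 2, 3],), 3)]
```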

On another note, I'm curious if hypervisors are inherently easier to secure (not currently more secure in practice) than kernels. It certainly seems like your empirical intuition of the kernel's attack surface is going to be different if you spend your time worrying about deploying Linux (like most people in this discussion) vs. deploying Solaris (like the author).

It comes off as a slew of strawman arguments... for example the idea that unikernels are defined as applications that run in "ring 0" of the microprocessor... and that the primary reason is performance...

All of the unikernel implementations he mentioned (MirageOS, OSv, rumpkernels) run on top of some other hardware abstraction (Xen, POSIX, etc.), with perhaps the exception of a "bmk" rumpkernel.

We currently have a situation in "the cloud" where we have applications running on top of a hardware abstraction layer (a monolithic kernel) running on top of another hardware abstraction layer (a hypervisor). Unikernels provide a (currently niche) solution for eliminating some of the 1e6+ lines of monolithic kernel code that individual applications don't need and that introduce performance and security problems. To dismiss this as "unfit for production" is somewhat specious.

I wonder if Joyent might have a vested interest in spreading FUD around unikernels and their usefulness.

I think the problems with this article are well covered already. Just a suggestion for Joyent: articles like this are damaging to your excellent reputation, would suggest a thin layer of review before hitting the post button!

Some additional meat:

- The complaint about Mirage being written in OCaml is nonsense; it's trivial to create bindings to other languages, and in 40 years this never stopped us interfacing, e.g., Python with C.

- A highly expressive type/memory safe language is not "security through obscurity", an SSL stack written in such a language is infinitely less likely to suffer from some of the worst kinds of bugs in recent memory (Heartbleed comes to mind)

- Removing layers of junk is already a great idea, whether or not MirageOS or Rump represent good attempts at that. It's worth remembering that SMM, EFI and microcode still exist on every motherboard, using some battle-tested middleware like Linux doesn't get you away from this.

- Can't comment on the vague performance counterarguments in general, but reducing accept() from a microseconds affair to a function call is a difficult benefit to refute in modern networking software.
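The gap being described can be felt with a crude, machine-dependent measurement. The sketch below times a real syscall (os.getppid, standing in for accept(), which is harder to benchmark in isolation) against a plain userspace function; absolute numbers vary widely by machine and Python's own overhead inflates both, so only the direction of the gap is the point:

```python
# Rough illustration of syscall vs. function-call cost. os.getppid
# crosses into the kernel on every call; plain() never leaves userspace.
# Numbers are machine-dependent and inflated by interpreter overhead.
import os
import timeit

def plain():
    return 42

n = 200_000
t_fn = timeit.timeit(plain, number=n) / n        # pure userspace call
t_sys = timeit.timeit(os.getppid, number=n) / n  # real syscall per call

print(f"function call:   {t_fn * 1e9:.0f} ns/call")
print(f"getppid syscall: {t_sys * 1e9:.0f} ns/call")
```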

I think Bryan Cantrill and Joyent are doing a number of interesting things, but this reads more like an ad than a genuine critique of Unikernels.

The primary reason to implement functionality in the operating system kernel is for performance: by avoiding a context switch across the user-kernel boundary, operations that rely upon transit across that boundary can be made faster.

I haven't heard this argument made once. There are performance benefits (smaller footprint, compiler optimization across system call boundaries, etc...). However, the primary benefit is not performance from eliminating the user/kernel boundary.

Should you have apps that can be unikernel-borne, you arrive at the most profound reason that unikernels are unfit for production and the reason that (to me, anyway) strikes unikernels through the heart when it comes to deploying anything real in production: Unikernels are entirely undebuggable.

If this were true, and an issue, FPGAs would also be completely unusable in production.

The essential point the lengthy article makes revolves around debugging facilities for unikernels. While mostly true for MirageOS and the rest of the unikernel world today, OSv showed that it is quite possible to provide good instrumentation tooling for unikernels.

The smaller point about porting applications (whether targeting unikernels that are specific to a language runtime or more generic ones like OSv and rumpkernels) is the most salient; it will probably restrict unikernel adoption.

For Docker, if only to provide a good substrate for dev environments for people running Windows or Mac computers, it is very promising.

I'm happy for this article because it does hit some points on the head. Other points are deeply entrenched in Bryan's biases, but I can't really fault him for that.

In particular, I am suspicious of the idea that unikernels are more secure. Linux containers make the application secure in several ways that neither unikernels nor hypervisors can really protect from. Point being a unikernel (as defined) can do anything it wishes to on the hardware. There is no principle of least-privilege. There are no unprivileged users unless you write them into the code. It's the same reason why containers are more secure than VMs.

Users are only now, and slowly, starting to understand the idea that containers can be more secure than a VM. False perspectives and promises of unikernel security only conflate this issue.

That said, I do think the problems with unikernels might eventually go away as they evolve. Libraries such as Capsicum could help, for instance. Language-specific or unikernel-as-a-vm approaches might help. Frameworks to build secure unikernels will help. Whatever the case, the problems we have today are not solved, and unikernels are not ready for production -- yet.

This blog post was clearly spurred by the acquisition made by Docker (of which I am an alumnus). I think it's a good move for them to be ahead of the technology, despite the immediate limitations of the approach.

First, let's put aside the start of the blog post, which consists entirely of empirical questions. Each potential adopter of unikernels will have to figure out for themselves whether their specific use-case justifies the cost and benefit of this particular technology, just like all others.

Putting that aside, debuggability is an obvious and pressing issue to production use-cases. Any proponent of unikernels that denies that should be defenestrated. I haven't come across any that do.

How to go about debugging unikernels is unclear because it certainly is still early days. However, I don't think the lack of a command line in principle precludes debuggability, nor to my mind does it even preclude using some of the traditional tools that people use today. For example, I could imagine a unikernel library that you could link against that would allow for remote DTrace sessions. Once you have that, you can start rebuilding your toolchain.

From TFA: "At best, unikernels amount to security theater, and at worst, a security nightmare."

As a security engineer, that's been a good one-sentence summary of unikernels, from my point of view, since forever.

I think the reason why unikernels are being developed is due mostly to ignorance, and if any of them is successful, it will morph into an OS that is closer to Mesos, Singularity, or even Plan9. That's faster, safer, more logical, etc.

It's not by any means the main point of the article, but: I'm not sure citing the Rust mailing list post on M:N scheduling is proof that it's a dead idea. The popularity of Go is a huge counterexample.

I'm not likely to run a unikernel anytime soon, but I wanted to respond to this:

> And as shaky as they may be, these arguments are further undermined by the fact that unikernels very much rely on hardware virtualization to achieve any multi-tenancy whatsoever.

Multi-tenancy is needed in some cases, but I don't need it: we use the whole machine, and other than the one process that does all the work, we only have some related processes for async gethost, monitoring/system-stats processes, ntpd, sshd, getty.

One of the things that seems to really fall flat is the claim that the security is bad for unikernels. The comparison point, though, is not a traditional OS running in a hypervisor but a container running on the host OS. In that comparison I think unikernels are emphatically more secure than what you get on Linux, and have essentially all of the same advantages of containers (plus a few extra ones).

For Joyent of course they have a book to talk up and they want to sell you their own solution which looks more like containers than a hypervisor. The Joyent solution is I think undoubtedly very interesting and well-considered but I have a suspicion that they've hitched their wagon to the wrong horse and Linux will keep winning.

For a long time the dominant programming environment for IBM mainframes has been VM/CMS, where VM is something like VirtualBox and CMS is something a lot like the old MS-DOS, i.e. a single process operating system. Say what you like but it was a better environment than anything based on micros until you started seeing the more advanced IDEs on DOS circa 1987 or so.

Now the 360 was a machine designed to do everything, but it's clear the virtual memory in most machines is an issue in terms of die size, cost, power consumption and performance and I wonder if some different configuration in that department together with a new approach to the OS could make a difference.

Joyent doesn't sell unikernel services, hence unikernels are bad. Color me shocked. Is it me, or has Joyent become less than upfront about their motives over the last few years? I don't require everyone to embrace "don't be evil" or whatever, but I always get a "righteous" vibe from Joyent employees that seems at odds with their actual behavior. Maybe they feel under siege or whatever, and are reacting to that? The whole thing is vaguely off somehow.

Isn't it a feature of (some) unikernels, that you can fire one up to respond to some request, and tear it down, in milliseconds? If so, running an AWS Lambda-like service with all the isolation you get in a HVM seems desirable for some situations. The isolation provided by a Docker container might not be good enough. It's a feature whose benefits, for some applications, might balance the debugging costs the article outlines.

I think he brushes by the security argument too quickly. Unikernels are (typically) smaller with less attack surface and more importantly it's easier to reason about them. I'd argue that this ability to keep more of the entire OS in your head at any given time improves security on a high level of abstraction.

Reading through the article I feel like the author and I are describing different things when we use the term unikernel, which is surprising because we both have experience with the same unikernel: QNX. I'm not very familiar with the other examples, but my QNX application definitely does have processes that I can see using top, htop, etc., and interfaces with system hardware using the QNX system calls; all things the article describes as not being features of unikernels.

Either the article is written in the context of writing kernel software, which wouldn't have much of an impact on my decision to run my application on a unikernel OS or not, or QNX is a far outlier from other unikernel OS's and that's why I'm so confused.

I haven't any experience with unikernels (still a student), but there are a few concerning things about them. And the main thing is that those concerns sit at their very core.

I have only respect and admiration for Mr. Cantrill, but this post felt kinda strange. After reading the last paragraph it sounded like an ad. Maybe they got scared of Docker possibly expanding and taking part of their cookie. I don't know, but these discussions were interesting to read at least...

I tweeted to him to research IBM's zTPF before writing this; I guess it conflicts with the narrative he's telling, though. In general, I agree with his sentiments, but there are no absolutes, only trade-offs here. You can, for instance, hook a debugger into the kernel or through the hypervisor. And debugging hardware looks a lot like debugging a unikernel in that sense.

The main use case for unikernel apps (the way I see it) is running language specific VMs like Beam, MRI or the JVM almost directly on bare metal and getting rid of all the complexity of OSes. The idea is to make it easier to debug, optimize and tune applications by removing traditional OSes complex kernels from the equation. The real argument for security (that the author omits) is derived from that: 20 million less lines of code in the stack that you deploy.

>"There are no processes, so of course there is no ps, no htop, no strace but there is also no netstat, no tcpdump, no ping! And these are just the crude, decades-old tools."

So does this mean something like a Symbolics machine or an Oberon machine can't be debugged, or does this mean that the unikernel has to be debugged at a higher level by the application(s) it's dedicated to?

TL;DR for those reacting to the title, but not reading the entire article:

Unikernels are young, and lack tooling/robustness that we have in more traditional approaches. They are not production ready yet, but will likely become a prominent way of building and deploying applications in the future.

More important to me is the fact that SpaceX streams its various attempts _live_, taking the risk of crashing the rocket out in the open. How many vehicles did BO lose before achieving a vertical landing?

Oh, and what about the fact that they have total control over the location and time of the launch? Meaning they basically get to pick weather conditions with an accuracy no one launching anything useful into space has. For example, the last failed SpaceX landing was officially attributed to fog icing the leg locks. That's not going to happen if you launch on a clear day from the desert.

These are more comparable to the Grasshopper attempts than to anything SpaceX has done recently: no horizontal speed, full weather control, no reporting on failed attempts, very limited weight. Even the last Grasshopper video seemed to have more side winds that had to be countered than this 100 km altitude video.

Even the format of the video itself screams "vaporware" to me. It looks like a trailer for a bad action movie, where some spacey something goes to space, separates and lands back in 15 seconds. Where the Grasshopper videos left me in awe, looping over them 5 times in a row, the BO ones just make me feel like they sh/could end with some sexual innuendo over their big rocket.

Honestly, I just roll my eyes now at these pissing-contest blog posts from Bezos. He does his team a disservice by suggesting that what they are achieving is actually more advanced than what SpaceX has done - it all looks like the approach the Soviets took in trumpeting various "firsts" in space in the 60s as the US methodically built capability far beyond what the Russians could sustain.

I am impressed by both companies' ambition, and SpaceX clearly has both the time and money advantage over Blue Origin. Let your accomplishments speak for themselves.

This is amazing, and a pretty remarkable feat that we are taking for granted. Space is super, super tough; the complex coordination of manufacturing something like this is being totally written off by many, but I assure you it is non-trivial.

A popular sentiment in that industry is that rocketry is like writing software composed of many modules, testing each module separately on Mac, then deploying the entire build on Linux. If it doesn't work, you don't just back out the conversion error or stray quotes you left in; your rocket explodes.

The engineering spend alone is massive, as is the damage to the company when a failure is syndicated across youtube. Taking big risks is something we should be promoting.

We are in a technological renaissance and it starts with lowering launch costs to achieve realtime LEO satellite blanketing and distributed communication channels to connect to the other fucking 3 billion people without internet. Bezos is accomplishing something great, and we don't need to qualify that statement.

He and Musk are definitively the Jobs and Gates of the 21st century if you want to use the obvious cliche.

What Gates did. What Jobs accomplished. It was pretty fucking powerful. Musk and Bezos are sort of doing that, except both are working in at least 3 industries at that same scale.

I wish Blue Origin, Sierra Nevada, Firefly and all the other people in new space well. Nano-sats will provide realtime insight to the earth, people will be able to own a satellite in ~5-10 years because of these advancements.

this is good for all of us, and the only negative thing to say about it is that for god sakes Jeff, that rocket does look a bit like a stubby penis.

I really don't see why these companies are competing. They are in totally different markets. Sure, there is some technological crossover in that they both use rockets, but this is like comparing a Prius to a locomotive.

It sounds like Blue Origin rockets are only capable of sending payloads to space for just an instant, before gravity pulls them back down to earth. They're nowhere near close to capable of putting anything into orbit.

I think it's great that New Shepard is coming along, but I don't get how Bezos feels he is helping his cause when he says "people living and working in space" when he doesn't come close to reaching orbit. That's the difference between an orbital mission and a sounding rocket.

Now, that he is getting closer to having tourist flights outside the atmosphere than Virgin Galactic? That is pretty cool and a fair comparison. Being able to out execute Burt Rutan? That counts for a lot, but don't try to compare yourself to SpaceX until you're putting things into LEO and getting back the hardware to use again.

Here is an animated video that shows what space tourism will be like. You will be in space for a few minutes. The view of the world from space will be amazing, plus you will be weightless. Not sure how long you will be up there or the cost, but it looks awesome.

Is there a significance to ~100 km? Is this, roughly speaking, space -- where the atmosphere is so thin as to be almost negligible? Clearly the atmosphere thins gradually, so how do we define where space starts? Is the significance of ~100 km something to do with the effects of gravity at that altitude from the Earth's surface? Does ~100 km give you weightlessness? Or is Blue Origin going up to ~100 km because it's a nice round number that is roughly (whatever that means) in space? But aren't kilometres completely arbitrary?
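One of those questions has a quick back-of-envelope answer: gravity barely weakens at 100 km, so the weightlessness on a suborbital hop comes from free-fall, not from escaping gravity. A small sketch using Newton's law of gravitation (standard constants; nothing here is specific to Blue Origin):

```python
# Back-of-envelope: surface gravity barely weakens at 100 km, so the
# "weightlessness" on a suborbital flight is free-fall, not absent gravity.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of Earth, kg
R = 6.371e6            # mean radius of Earth, m

def g_at(altitude_m):
    """Gravitational acceleration at a given altitude above the surface."""
    r = R + altitude_m
    return G * M / (r * r)

g0 = g_at(0)
g100 = g_at(100e3)
print(f"g at surface: {g0:.2f} m/s^2")    # ~9.82
print(f"g at 100 km:  {g100:.2f} m/s^2")  # ~9.52
print(f"ratio: {g100 / g0:.3f}")          # ~0.969 -- still ~97% of surface gravity
```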

Also, can people please stop knocking Blue Origin. We get it at this point, okay? I'm a huge fan of SpaceX and Elon Musk but does Blue Origin have to lose for SpaceX to win? No. There's nothing in this post from Bezos bashing SpaceX as far as I can see. There's simply saying, look, we did it again with the same refurbished rocket. Good on them. May they do it again and again. And so may SpaceX. The next space race is on, happy days!

One would think that using fuel to touch-down slowly is wasting fuel since they could use some kind of capture scheme with a parachute instead. I've read many times that the weight of the fuel is a big problem in spacecraft.
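The trade-off can be sketched with the Tsiolkovsky rocket equation, which gives the fraction of vehicle mass that must be propellant to supply a given delta-v. The numbers below are purely illustrative assumptions, not Blue Origin's figures:

```python
# Rough sketch with the Tsiolkovsky rocket equation: the propellant
# fraction needed to cancel a given delta-v. Illustrative numbers only.
import math

def propellant_fraction(delta_v, isp, g0=9.81):
    """Fraction of vehicle mass that must be propellant to supply delta_v."""
    ve = isp * g0                       # effective exhaust velocity, m/s
    return 1 - math.exp(-delta_v / ve)

# Suppose (assumed values) the landing burn must cancel ~400 m/s of
# terminal velocity with an engine of ~300 s specific impulse:
frac = propellant_fraction(400.0, 300.0)
print(f"propellant fraction: {frac:.3f}")   # ~0.127, i.e. ~13% of landed mass
```

Whether that ~13% beats the mass and reliability cost of a parachute-and-capture scheme is exactly the engineering question the comment raises; for small delta-v the propellant penalty is less brutal than intuition suggests, because the exponential is nearly linear there.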

This is awesome. A great complement to the work SpaceX is doing. To put it in perspective, this rocket went about 10x as high as an average international airline flight cruises, but would still need to go about 4000x as far to reach the moon. Not sure about the 3 mile-per-hour impact with the ground on my way home from work. I suppose with a nice soft seat it would be fine, but by the time United Airlines gets done with it you'll be packed in like an NYC crosstown bus with a seat just as hard.

I was thinking: Docker. Hmm. Containers. Hmmmm. Xen developers. Hmmmmm. Seemed really boring until I saw " Anil Madhavapeddy, the CTO of Unikernel Systems." Oh... I know that name: it's on quite a few IT/INFOSEC papers I stashed and shared over the years. A smart researcher with a practical focus. Didn't know he was CTOing at a startup.

Yeah, Docker is about to get some enhancements for sure. Maybe some real security improvements, too. You can count on it.

I used to be in the unikernel camp of "this is the next step in virtualization tech", but having played around with both containers and unikernels, and now developing with containers, I think unikernels are going to occupy only a very niche space.

There are two touted benefits of unikernels, performance and security. Performance turns out to be a red herring, as the overhead of an OS vs a Hypervisor turns out to be roughly equivalent (with the OS actually winning in some use cases).

Security is definitely an issue, but it's so abstract. My company is a compliance (a very specific industry's compliance) cloud provider and we have gone with Docker as we get to use the OS as our Hypervisor, which means it is much more extensible and, in our use case, secure as we are able to auto-encrypt all network traffic coming out of the hosts with a tap/tun virtual device.

Two things need to happen to make unikernels attractive. A new hypervisor needs to be built, one that is just as extensible as an OS around the isolated primitives. It should also have something extra (like the ability to fine-tune resource management better than an OS can). Secondly, a user-friendly mechanism like Docker needs to happen.

I'm not very hopeful given that their CTO is quite open about wanting to embrace-extend-extinguish competing technologies. This move embraces unikernels, and now they are perfectly positioned to go the rest of the way.

OCaml is a fine language that most people don't use. If I want a unikernel in my own language, do I need to build one myself? I wonder if someone is building a unikernel that has external language bindings, which would allow one to create "high-level" unikernels. This would open up the possibility of completely bypassing the installation of a language runtime. For example, I could just type some Python code into a browser editor, and the backend could take the source code and fork a Python unikernel to run it. Docker can currently do this, but one still has to rely on an underlying OS to manage all the packages etc. Wouldn't it be nice if you could simply write "import xyz" and the unikernel took care of fetching them automatically?

Docker has some really smart people leading it to success. But make no mistake, the economic thesis of Docker depends on a massive landgrab of vendor lock-in. This acquisition is a hedge against any Unikernel company looking to make the same landgrab.

There's a reason Docker is so heavily funded by the biggest cloud companies. They're the ones who stand to benefit from specialized Docker containers optimized for their own platforms. It's a great way to package open source services and leverage the effort of the developer community into centralized profit.

It seems blatantly obvious that Docker is looking to build the app store of devops. I wish them the best of luck, but they are going to face some heavy resistance from open source initiatives. There is nothing about Docker that makes it fundamentally superior to the systems it's based on, specifically the LXC project. When developers finally wake up to the fact that they are sleepwalking into a massive walled garden, Docker will lose some of its clout.

So far Docker seems to be a good citizen when it comes to FOSS, hopefully that will continue.

I've been following the Mirage and rumpkernel lists for a while and it's nice to see these hackers getting traction (and money!) for their efforts.

Not too long ago unikernel.org was started, which IIRC was billed as a community-driven "one stop shop" for information on the subject, which I assume is independent of the company "Unikernel Systems". Hopefully Docker won't go rogue and start attacking others that use the term "unikernel" by claiming that it's trademarked or something like that.

I have been following unikernel development for some time. The work done by Antti Kantee and others on rump kernels [1] is most promising and has the right abstractions (POSIX userspace using the NetBSD stack). Also, in the demo video, the unikernel folks should acknowledge the rump kernel work, as they are using it :)

Our latest distributed database uses a mono kernel too. We use Pure64[0] to boot the system and then the "kernel" is derived from QK[1], but it's also just our database software.

Other than reducing complexity, our distributed database uses the virtual memory hardware in a unique way, so a mono kernel was essential.

Having said that, the easiest way to develop such a system is not on the bare metal; it's by running Linux in such a way that it only uses the first 1 or 2 cores, and then running your "custom kernel" on the other cores in the system. Then you can use a normal debugger and utilities during development. It's only when you actually want to put it into production that you can consider not using Linux at all.

I'm curious about how support for building unikernels will be integrated into Docker. The current Dockerfile-based build process doesn't support separate build-time and run-time environments, but when building a unikernel, the build-time environment is completely different from the run-time artifact. Support for separate build-time and run-time environments is also useful when building container images, so the image doesn't include things that are only necessary at build time. So I hope that problem is solved first; I think the addition of unikernel support will be more natural that way.

> The result of this is a very small and fast machine that has fewer security issues than traditional operating systems (because you strip out so much from the operating system, the attack surface becomes very small, too).

Obviously traditional operating systems provide a lot of interfaces that represent attack surface, but they're generally able to be secured. On the other hand, much of the operating system actually _implements_ security, so if you throw it out, you're losing that.

Very nice site! Since your site is so much based around search, I thought I would pass on a few suggestions based on what I saw. If you happen to be using a search engine for your content such as ElasticSearch, SOLR or maybe Azure Search :-), there are a few simple things you could add to make the experience a little smoother.

Suggestions in the search box are nice, allowing people to quickly see results as they type. You could even add thumbnails of the images in the type-ahead, such as you see with the Twitter Typeahead library (http://twitter.github.io/typeahead.js/).

I also noticed that your search does not handle spelling mistakes or phonetic search (matching words that sound similar). Finally, through the use of stemming, search engines can often help people find additional relevant content. For example, if a person is looking for mice but your content has the word mouse in it, this will bring back a match. Since you don't have a lot of content, this can really help people find relevant content.
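The mice/mouse idea can be sketched in a few lines. This is a toy index-time normalizer (a naive suffix-stripping stemmer plus a tiny irregular-plural table, all invented for illustration); real engines like Elasticsearch and Solr ship proper analyzers that do this robustly:

```python
# Toy sketch of search-engine-style normalization: crude suffix stripping
# ("stemming") plus a small irregular-plural table, so a query for "mice"
# matches a document containing "mouse". Illustrative only.
IRREGULAR = {"mice": "mouse", "geese": "goose", "feet": "foot"}

def normalize(word):
    w = word.lower()
    if w in IRREGULAR:                 # lemmatization-style lookup
        return IRREGULAR[w]
    for suffix in ("ing", "es", "s"):  # naive stemming
        if w.endswith(suffix) and len(w) > len(suffix) + 2:
            return w[: -len(suffix)]
    return w

def matches(query, document_words):
    """True if the normalized query hits any normalized document word."""
    targets = {normalize(w) for w in document_words}
    return normalize(query) in targets

print(matches("mice", ["a", "mouse", "ran"]))      # True
print(matches("searching", ["search", "tips"]))    # True
```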

I like to think I am very conscious of copyright. I might not always adhere to it in my personal life (who can claim they do these days?!) but professionally, everything is done strictly legitimately. With that in mind... am I the only person who is slightly uncomfortable with the phrasing around PD and CC0? With other copyright licenses, there is somebody saying they own something.

I'm particularly uncomfortable with Flickr's "no known copyright restrictions". What if people infer PD from that and upload it somewhere else under CC0? Then it gets sucked into this finda.photo? Yuck.

As for finda.photo, why are you truncating the source down to just a domain name?! Many of the sources include proper uploader details so why aren't you copying those over and displaying them?

I know you're not required to, but attribution isn't a bad thing if you can give it. I for one would be much happier using a photo if I knew exactly where it came from.

One of the about pages says that the photos are in a GitHub repo, which sounds really cool, until you follow the link and the repo hasn't been shared yet. Hopefully it's just a matter of time before it is.

http://finda.photo/image/14847 - Tags are weird. This is not a dog, mouse, canine or feline. It's not sitting. It has 'eyes' but I think that might be irrelevant. Although I would agree that ferrets (not an included tag) are cute, I'm not sure I'd describe them as domestic. Otherwise, great!

If you're still a little concerned with licensing and copyrights, I would recommend taking a look at www.graphicstock.com - you just pay a flat monthly or yearly fee and you can download as much as you want.

Disclaimer: I work for the company behind GraphicStock. Oh, and we're hiring!

Here is the bottom line -- if smartphones can not be securely encrypted there are a lot of things we can't use them for:

- Phones aren't going to replace credit cards
- You will need to type in all your passwords each time you use them
- Two-factor authentication will need to be done with a different device
- HealthKit and other medical records will need to be moved elsewhere
- Any profession where there are very serious consequences for leaked communication will no longer be able to do it through a smartphone (lawyers, doctors, executives)

Basically losing or having your mobile phone stolen will be equal to a burglar pulling up to your house or office and driving away with every sensitive document and record in the back of a van.

No tech company wants to see the end of the mobile revolution. Forget the national interest side to this, anyone supporting broken encryption basically looks like a total moron.

Assemblyman Jim Cooper represents Elk Grove, a city of ~160K just south of Sacramento. Apple is the second-largest employer in Elk Grove [1] and is currently expanding its footprint there by several thousand jobs [2].

Pretty much the same problem as with the NY bill. Buy a phone that is unlocked / decrypted at the time of sale. The next step is for the user to log in and encrypt. I don't see how this bill actually addresses that. I guess this hinges on the definition of "authorized" when it comes to encrypting something I own. I hope I don't require authorization to do this.

A few questions I posed to the NY senator earlier this week:

1. Would you use such a phone knowing that the government / Apple / the seller of the phone could easily get into it?
2. Would it be legal for someone in the legal profession to use such a phone without being disbarred for negligence of the right to private communication?
3. If sold unlocked, and then later locked (i.e. every phone right now), where's the change?
4. Where does the 4th Amendment fit in with this?
5. What should we do with old phones that don't support this? Dump them in the bay, I guess?
6. Where are the technical experts telling you that this is actually feasible to do securely and safely? I'm looking hard, but only seeing negative responses from those that know what they're talking about.
7. Who's responsible for fixing the broken device once the master key gets leaked? The manufacturer? The state of {CA/NY}?
8. The list goes on.

For a long time gun owners have had the singular pleasure of having massively intrusive, incoherent regulations written by people with no technical understanding of the subject matter. It's nice to finally have some company.

This is the game. There will never be a "Prohibiting Encryption and Preventing Privacy Act." It will always be an ostensible act of patriotism and protection: combating terrorists, child molesters, sex traffickers, drug cartels, money launderers, and other easy-to-demonize scary folk.

Cryptography gives you two components: encryption/decryption, and authentication. Break one of those, and they're both broken. And that's what really bothers me about all of these politicians who fixate only on the encryption part. They're oblivious to the extreme risk introduced by breaking authentication.
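To make the authentication half concrete, here is a minimal sketch using Python's standard-library HMAC (the key and messages are made up). The point is that a message authentication code is what lets the receiver detect tampering; any mandated weakening that bypasses or breaks this check means forged or altered data verifies as genuine.

```python
# Minimal sketch of message authentication with an HMAC tag.
# If the verification step is weakened, tampered messages pass as authentic.
import hmac
import hashlib

KEY = b"shared-secret-key"  # hypothetical pre-shared key

def tag(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    """Check the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(tag(message), mac)

msg = b"wire $100 to account 123"
mac = tag(msg)

assert verify(msg, mac)                               # genuine message verifies
assert not verify(b"wire $9999 to account 666", mac)  # tampering is detected
```

In a real protocol like TLS, encryption and authentication are bound together (authenticated encryption), which is exactly why you can't "just" backdoor the confidentiality part and leave integrity intact.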

When the legislature wants to do something unpopular (or even stupid which is what this is), associate it with the "Evil Of The Era" and propose the bad legislation as the solution to said evil. These days, popular "Evils" are Human Trafficking, Child Porn, and "Terrorism". The first two evoke extreme emotion of crimes committed against the most innocent of victims, so they're the best choice in this scenario. In the 80s-90s it was anything to reduce "Crack Babies" or win "The War on Drugs".

It's an old trick -- when people talk about logical limits placed on the first amendment, you'll hear the phrase "Shouting Fire in a Crowded Theater". Most of those who utter it don't realize that this phrase originated as part of a ruling that had nothing to do with "fire" or a "crowded theater" but was made to curtail the dangerous speech of opposing the draft during World War I[1].

So the 2nd crypto war has moved beyond mere fighting words. The long-term battlefield is usually the court of public opinion, so I hope Silicon Valley recognizes this challenge to its power. Tech firms should have been attacking this rhetoric hard when it started; instead, accusing politicians of not understanding math/crypto has been the common response.

Do you want crypto to work? Or do you want to be forced to replace crypto with security theater? Is your business actually willing to actively protect a free internet? Or is it easier to assume this is "someone else's problem"?

I guess we will see which companies defend themselves, and which companies think being a collaborator is more profitable?

so basically there should be 2 components sold separately - "dumb GSM connectivity module" and "smart OS module" (iPod basically). The latter not having cell phone connectivity wouldn't be subject to that law and thus can have FDE/whatever. The GSM module can just attach to the "iPod" back like external battery.

The "shall" wording is going to keep this in courts for years, even if it does pass.

Shall is the source of more litigation than any other single word in the English language. It can always be debated because no one knows if it reliably means "can", "must", "may", "might", "will", "should", "ought to", or "is allowed to".

All the above uses can be supported with evidence. Because language evolves.

It's a killer word for any law or contract and guaranteed to be disputed.

I am not a lawyer, btw.

But if this somehow passes, it will get tossed because of the wording.

Where the hell is Anonymous in all this? Shouldn't they be out there doxxing and haxxoring and whatever it is that they do to these kinds of people? I'd figure if someone stands up and says "Encryption should be illegal", they probably don't encrypt jack shit themselves, and they're probably easy targets. They might even take the hint and say "shit, I should have encrypted my internetz" and change their stance. Eh, doubtful.

Someone needs to make a big deal about how this is bad for business because it allows the Chinese/Russians/French/Welsh/whoever to steal American Innovations(TM) and then write to whomever this person will be challenged by in upcoming elections with "x is anti American Business" talking points. Both sides can play "Won't someone think of the children?"

I'm waiting on the day something like this gets proposed in all of the EU states, for the same BS reasons.

As a matter of fact, I'm certain that the current leaders of the EU countries who publicly invited immigrants to their states (we all know the most prominent one) were considering this as an easy way to change the privacy laws - and be applauded for it.

I'm a black guy working in IT at a company in the top 5 of the Fortune 500. This is a complicated issue. There have been experiments showing that having a "black" name lowers your job prospects [1]. Other studies show:

"...race actually turned out to be more significant than a criminal background. Notice that employers were more likely to call Whites with a criminal record (17% were offered an interview) than Blacks without a criminal record (14%). [2]"

So all the people acting as though our society is some meritocratic utopia can keep that bullsh*t to themselves.

On the other hand, there is no doubt that blacks underperform relative to whites when it comes to academics. There are obvious reasons for this, but those reasons don't change the truth. Companies that are heavy on engineering are going to use academic markers to try to select the best of the best. There aren't enough blacks performing at or above their white peers in the top percentiles of CS to give us proportional representation.

I am a non-minority, CS graduate of a HBCU. There were amazing hackers in my program. I believe there were around 200-300 declared CS majors across all levels when I graduated. Numbers are down from those peaks now.

My institution was heavily recruited by big corps, government labs, and east coast companies.

The "best" students, by GPA, were in high-demand for all of the above. Many were heavily recruited into management tracks for non-IT companies. A large number of government institutions and defense contractors were also eager to land new grads from our school. The "best" students, by hacking skills were (maybe stereo-typically for hackers) less interested in classes that didn't involve slinging code, but also all landed programming gigs. Less committed students, from either metric, seemed to still be getting jobs but I can't generalize as to the job type.

I think it is fair to say that my undergrad course-work was not as demanding as (guessing a bit here) Stanford, MIT, or CMU. But my GPA and GRE scores landed me multiple job and graduate school offers.

One aspect of hiring from (at least my) HBCU is that there is a very strong network effect - alumni come back to the school to recruit interns and full-time hires for their companies, help prep students for the process, and students look to those alumni as trusted sources.

If Silicon Valley really wants to hire from HBCUs, that is the path I would recommend. Hire a few alums from the HBCUs and make recruiting and grooming candidates a priority for those alums.

"as the only African American on her team, she didnt feel she had much in common with her colleagues. When I went out to lunch or something with my team, it was sort of like, Soooo, what are you guys talking about? she says"

I find this sentence really shocking, perhaps because I'm French and in France we try to assimilate people more (I don't really know), but I would definitely think that as a white software engineer I have a lot more in common with the black software engineer working with me than with some random white dude.

> One senior, Sarah Jones, ... "There are not a lot of people of color in the Valley, and that, by itself, makes it kind of unwelcoming."

This statement may be true if "people of color" means African American. Otherwise, it is just not the case. I do think, from my personal experience, the Valley is probably the most diverse place I have been. I've seen people from all over the world here: Asian, Latino, European, etc.

> When they started interviewing seniors, companies found, as Pratt did at Howard, that many were underprepared. They hadn't been exposed to programming before college and had gaps in their college classes.

Is it that they don't hire black coders? Or is it that there are very few black coders to begin with? African Americans make up 13% of the US population and they graduate college at a lower rate than other ethnic groups.

It would also be interesting to look at selected majors across ethnic groups. I suspect that blacks go into CS at a lower rate than other ethnic groups.

Looking at the commentary at Hacker News, I would say that the author has done a disservice to the topic by ignoring the minority status of East Asians, South Asians, and Latinos.

With very minor changes, they could have used the correct words to cover the topic they really wanted to cover: that there are fewer black programmers in SV than is desired/expected/needed. And that is a topic that deserves discussion. But because the author minimized the experiences of a huge number of other minority groups rather than focusing on the concerns at hand, we are now squabbling about essentially irrelevant material.

95% of the article could remain intact. By cutting the 5% which is both fluff and offensive to other groups, the rest of the article would be much stronger.

As a human whom people in America would call black/African-American, I really dislike articles like this. I understand that some black people feel like they can't relate to others, but I find the entire premise of such arguments about homogeneous workplaces and cultures completely ridiculous. The culture of 'black' people in Alabama will be very different from Howard or Washington DC; does that mean Alabama is unwelcoming?

Secondly, who cares what schools top-tier companies are targeting? If Howard is churning out software engineers so good they can't be ignored, then a) they won't need Google et al. to hire them, and b) their skills will speak for themselves when they apply for a job.

It seems like so many people (black, white, Asian, etc.) actually buy into this socially constructed division by culture or skin color, which is completely insane to me. To me it's like dividing people into groups by eye or hair color and saying you feel unwelcomed by the blue-eyed people.

Articles like this seem to reinforce the notion that there is this 'otherness' of culture and skin color. If Google ,Facebook, etc are ignoring software engineers that are top notch from Howard and other historically black colleges, that would be a problem, but I doubt that is the case. Most companies want people that can get the job done well and know their stuff in my experience (I've worked in Silicon Valley and Fortune 50 companies).

The article seems to repeatedly make the point that the black people at tech companies felt out of place while working at Google, etc., as if any Indian, Asian, or white person does not experience the same thing (someone from India will have to learn the culture of SV just like someone from Howard Univ. or some white person from Alaska). Who cares if you don't watch the same TV shows or read the same books? If anything, I think that's a good thing, as it's a starting point to learn more about something you haven't experienced. I think the most important thing is mindset and attitude going into situations like this. Curiosity and open-mindedness would do wonders for the people in the article who feel like 'others' in SV.

I don't feel like the culture of SV is as homogeneous as they are trying to project; this 'otherness' is the real projection.

I've never openly been discriminated against, or felt like the color of my skin had anything to do with my success, in Silicon Valley or on the East Coast while working at tech companies. I've found almost all people of all 'races' to care only about competence and efficiency (other than the occasional jerk or misanthrope).

I've only been to SF twice, but I have to agree that it is pretty white. Not as white as Colorado, but still. I also toured Medium and was blown away by the lack of black folks working there.

It's tough to describe the feeling, but when you're the only black person in the room, you do feel different--a little uncomfortable. However, I don't think this reflects a conscious effort to not hire blacks, rather there are institutional and socioeconomic barriers that leave us underrepresented in tech and many other fields.

The real story here seems less about race and more about how the CS program at Howard was mediocre (or at least didn't produce students that met Google's expectations), and one of the professors, with a background of working for Google and attending an elite CS program, realized its deficiencies, improved it, and was able to get a bunch of students hired by Google by filling in their knowledge and experience gaps.

The school happens to be historically black but I'd be surprised if you found hiring statistics from a majority white school with a similarly ranked CS program to be substantially different.

It's interesting that it describes Silicon Valley as being too white, when it seems like there are quite a few Asians. Even working elsewhere, a high percentage of our programmers are Asian, higher than the metro's demographics would suggest.

Black engineers would prefer to work in IT at a big bank with high steady pay than opt for the highly variable risk/return profile of being an engineer in SV.

And why do they do this? If you look at poverty as an overriding theme for blacks in America, even for those who are not themselves poor, then one would clearly prefer a lower-risk, medium/high-reward job to a high-risk, low- or super-high-reward job.

Now, the above only explains why black Americans underparticipate in startup culture. It says nothing about why they are underrepresented at high-paying, low-risk shops like FB, Google, YHOO, Salesforce, ORCL, etc. Unless, of course, you need to have first slugged it out at a few startups before getting a job at a bigger shop. I'd say that's maybe only true for parallel hires and not kids right out of university.

I would say some of the students in the article probably aren't SV material - "They'd begun studying computer science in college"? So you've been involved with computers for four whole years and you think you're qualified for a top-tier job? I'm sure not all of the students only started in college - and I'm not naive enough to think there isn't prejudice in SV - but didn't start until college...

I have a few minor issues with this article/approach at Howard. First of all, it makes it appear that companies like Google and Facebook are the end-all-be-all. There are more companies than the top tier. In fact, there are also companies outside Silicon Valley. Not only are they setting students up for potential failure, but they are painting a different view of what the industry looks like and where it is located. They are even discounting NYC, which is just a train ride away.

Also, the problem with Howard not being a top tier school applies to every school that is not in the top tier. Many do not even get the same access to recruiters that Howard does.

I also believe schools should be teaching fundamentals and theory and not be used as job training.

That said, the assimilation problem, "cultural fit", is real, but often neglected. Many programs trying to fix the minority imbalance simply focus on outreach, the recruiting pipeline.

The slow progress reflects the knottiness of one of Silicon Valley's most persistent problems: It's too white.

It's actually too Asian. And too Jewish. That is, if you're using, you know, math, and a simpleton's understanding of demographics. If you're using contemporary ethnic racketeering, then yeah, it's too white. Even the NFL is too white.

I actually think it would be funny to see Bloomberg come out with an article demanding that fewer Asians and Jews be hired wherever they excel.

What surprised me from the article is that only 8 out of 10 students at Howard are black. Howard is a historically black university and I assumed the percentage would be 90%+. I did some research and found out that the latest numbers I found were 91% "Black or African American" students at Howard. http://www.collegefactual.com/colleges/howard-university/stu...

Offtopic - but Howard has an amazing marching band. They played Rutgers when I was in school (football), and my favorite part of that game was the Howard band at halftime.

The biggest issue I have with the article is that it presents SV as the only place you can go to be successful with a CS degree.

It very well could be that the article misrepresents the efforts of Prof. Burge and the Howard staff since the article is focused on SV.

But if SV is turning away energetic, engaged, intelligent and capable new recruits, then please send them to NYC, Seattle, Chicago, Philadelphia, Triangle Park, LA, or anywhere else where companies are looking to hire.

It might not give you a "direct impact" on SV itself, but it does get your people into good paying jobs where they can further develop their skills and experience (especially for those without a long childhood of working with computers). It's a small industry. Soon enough these graduates will be attending conferences and making an impact on this culture.

More importantly, they'll also be representatives in their local communities, helping to inspire the next generation of students who don't see themselves or their experience reflected in this industry. And perhaps that next generation will be more likely to pick up programming in middle school.

This is a very simple problem and racism has almost nothing to do with it:

White founder has a business idea and they bring along their friends - most likely white. Those friends bring in their friends and colleagues - also most likely white - to become the executive team. The executives hire tomorrow's managers. By that time the vast majority of employees are white, and even if they work very very hard to hire black people, it will take a very very long time until there is proportional representation all the way up to the executive level. Some execs work well into their 80s, meaning that it could take more than a century until there is population-proportional diversity at any predominantly white-founded institution.

The longer a lack of diversity persists in a company's trajectory the harder it becomes to fix it. The only solution I can think of is for black people to start more companies themselves.

A good article with a horrible, clickbaity headline. Howard's CS department head and even the students seem fully aware that "the problem" doesn't lie in evil Silicon Valley HR departments, but in the challenges of preparing kids who haven't coded until college.

I am a Howard University alum of the Computer Science Department. My time at Howard was eye-opening. I went to Howard because I got a track scholarship and I wanted to get the "A Different World" TV-show experience. As a first-generation Nigerian American, there was a lot of diversity in the sense that I got to meet black people from all over the world. I even got the chance to learn about my history. Also, when I graduated, a lot of my classmates went on to work at Microsoft, Goldman Sachs, or other Fortune 500 companies. Google had IPO'ed a year earlier and wasn't really on campus. Google and Facebook would get students a couple of years after I graduated. I know a couple of those students who are doing well because they got in early at Facebook. There is a decent amount of Howard alumni at some of the tech companies. Anyway, what Dr. Burge is doing is great. His focus is to get more students to work on projects and get more tech companies on campus. More people in DC and the US are helping as well.

I'm not American, but I've lived here for quite some time, and this is purely my observation from an outsider's perspective. Ask yourself: how many times do you see African Americans in a group of Asian, Latino, or white people? I think the answer is very simple - black people segregate themselves not just from white people but from people of ANY other race. The majority stay in their cliques and never try to get out, and the general consensus is "Why even bother?" I mean, the young lady in the article says she doesn't fit in, and all that goes through my mind is: how is this anyone's fault?

> Pratt also noticed that many advanced classes at Howard and other black colleges weren't as rigorous or up-to-date as they were at Carnegie Mellon or Stanford. By senior year, students risked falling behind their peers from other institutions. "I'd ask faculty members, 'Why are you teaching this course that way?'" he recalls. "And they'd say, 'Well, I've been teaching the course for 25 years.'"

That's the core issue right there. The school hasn't adapted to the technology and practices. What use would you be on day one if your coding knowledge was stuck in 1991?

There are not a lot of people of color in the Valley, and that, by itself, makes it kind of unwelcoming

I don't like this attitude. I'm not sure what it should be called, but you should feel relaxed in your own skin, accept diversity, and not mind it when most of the people around you are not the same color.

Why is it unwelcoming? Maybe it's not what you hoped for, but why describe it so negatively?

Look, the US is filled with businesses in big cities that, while not "software companies", very much need to write software to conduct operations. Take Houston, Texas, for instance. It doesn't matter where you came from, what you look like, or who your daddy is. The game is supply and demand. If you can supply, you are in demand. If you are a native English speaker, then you are already ahead. In this country, if you are willing to move, work your ass off, and actually like programming - eventually you will reach gainful employment. Especially if you can pass a drug test. The first year? Hell no. Look for the hardest shit you can find that people with no patience think they are "too good for", and you will be filling in your experience in no time. Life is not easy or fair. If you are smart enough to do even some half-ass programming, you have been given a gift.

Some interesting points in the article, picking up on the monoculture in a lot of software companies, and especially games companies. In my experience outside Silicon Valley, I think it's a programming thing in general, wherever you are. After many years I'm quite tired of the endless Star Wars talk, references, and T-shirts that I have to endure from colleagues. I even like the Star Wars movies, but there comes a point where I think we could surely shut up about it for one day. However, I have to endure it; I've worked in numerous companies and it's the same thing all over the place, the blah monoculture. I'd love to work with some of the people in this article.

I started programming at a top computing university where my father was on the faculty while still in high school. I was working with other high school students and college students who were very passionate about programming and computer hardware. I remember working 80 hour work weeks during summers and breaks learning an enormous amount from fellow students as well as faculty and researchers.

Generally, if one wants to get into computing in a highly competitive environment, they should attend a top computing university. Fortunately there are top schools that are public as well as private.

I rarely see blacks (or Hispanics) at computer Meetups in NYC. For that matter, at many computer Meetups, there aren't so many women either.

I'm a gay black guy who went to SMU on a scholarship and majored in finance. Coding is my hobby. The New York Times hired me to write code for them. I worked there 2.5 years. We are in tech already. Silicon Valley needs to pick up those of us who are already here. Some of us are more than willing and capable to work with Silicon Valley.

I think it's similar to the issue of women in comp sci. It's a recruiting and culture problem in the whole industry. African Americans have more barriers than women, imo, because statistically they have issues of poverty and less early-age tech exposure on TOP of the culture mismatch.

What a big steaming pile of political correctness. The author interviews a tiny handful of mediocre college students who blame their "color" for not getting hired right out of school by big, glamorous tech companies. Are we supposed to be sympathetic?

I remember my own struggles to break into the tech business, many years ago now. Although white and "privileged", i.e. no cultural barriers to entry, I found it very tough and had to jump through hoops, work my way up from semi-tech to actual development positions. I took night school courses on a credit card and got into debt. I bought whatever gear I could afford and stayed up until 3am writing code, then got up and went to my menial job.

The opportunities didn't just fall in my lap; I had to earn them. No glamorous technology titans came knocking on my door, begging me to come interview. I had to work for everything I got, and God, it was hard. It still is.

This same work ethic applied to everyone; I was on the chatboards in the late 80s, all through the 90s, and the 2000s, and the story is always the same. You have to have the right stuff if you want to build a career in technology -- be smart, creative, have some initiative, humility, humor, etc.

So maybe Black Americans don't get that in their upbringing. Maybe they're not taught to be smart, competitive, hard-charging overachievers. Maybe they're not encouraged to be creative, to think outside the box, etc. I don't know. What I do know is, you can't compensate for that by handing people undeserved opportunities.

Affirmative action is a failure; it's nothing but a form of welfare. If Google reaches out and hires under qualified people from Howard or wherever, just to say it's trying to overcome "barriers" and achieve "diversity", that's all doublespeak that in the end means "We will hire a few token blacks because we have extra money. It will make us feel good, and it will fool them into thinking they made it. Whatever. We have to do it."

The issue I take with this is that American assimilation is a two-way street. Every culture puts in and takes out. Tacos, sausages, pizza, sushi: these are American foods as much as they are Mexican, German, Japanese, or Italian. Some of them are more American than their source in ways. It's not about forcing white Anglo-Saxon culture. It's about forging American culture and identity.

"Google revealed that its tech work force was 1 percent black, compared with 60 percent white. Yahoo disclosed in July that African-Americans made up 1 percent of its tech workers while Hispanics were 3 percent."

Everyone is the direct descendant of Africans. That's not what "Black" means in the sense it's being used. The people you are talking about are culturally Dravidian Indian, not culturally African-American.

Computer programming is one of the most cognitively demanding professions in existence. Ability to program correlates pretty highly with cognitive test scores, such as the SAT Math. If you look at the top scorers on such tests in America, only about 1% are black. The ratio of black engineers in Silicon Valley matches what you would expect based on the test scores.

To see it visually, this is the bell curve based on millions of test results: http://i.imgur.com/zB1oENS.png?1 There are very simply very few black people in the far-right portion. This is not even a disputed fact (the dispute is mainly over why the curve is skewed and if it can be fixed; the existence of the skew is incontrovertible).

This was at least partly because of the way companies recruited: From 2001 to 2009, more than 20 percent of all black computer science graduates attended a historically black school, according to federal statistics - yet the Valley wasn't looking for candidates at these institutions.

The average SAT scores at Howard are thoroughly mediocre, on par with second-tier state colleges, and you would not expect an elite company to concentrate on Howard any more than you would expect it to concentrate on Southern Illinois University or the like. The only reason an elite company would recruit at Howard is for diversity reasons.

Foreman is strong-willed, which sometimes gets him in trouble. "I just chalked it up to soft skills, I guess," he says, explaining that he and his interviewer had clashed. Pratt says he'd been furious to learn that Foreman had been passed over. Other companies said no, too.

So was he a good programmer or not? How do we the readers know that he could do the job and was passed over unfairly?

The phenomenon, stereotype threat, is getting more attention in the Valley, and companies have begun training employees to be aware of it.

"She doesn't fit the profile of what people think of when they think of engineers. Even though people think of Silicon Valley as a big meritocracy, I don't think that's how it works."

There are now a number of companies that do automated programming interviews -- Starfighter, Hacker Rank, etc. Do these manage to overcome stereotype threat? Do these blind interviews allow through more African-Americans? Before throwing around slanderous accusations, one should actually show that Silicon Valley is treating people with the same programming ability differently.

The sad thing is that these tech companies cannot just admit, "We don't recruit at Howard because the SAT scores are not there." Rather these companies have to pretend that anyone can be a great programmer if they just put in the work, and a lot of people end up with false hopes that only get crushed.

Let's assume there is this black guy / girl sitting in the interview room with you. You (white) are asking interview questions and you soon realise that the person you are interviewing is excellent. But something is off, you can't really put your finger on it. What do you do?

Interesting. Although this point of view might be a bit simplified, there's another example of a weapon, which was both relatively cheap to make - and really hard to use unless the whole society was built around the skills required to use it.

Mongols! Light cavalry using composite bows was both unbelievably effective and hard to copy for everybody but steppe nomads. All Mongols were hunters, they practically lived with their bows on their horses. So the whole population could do warfare.

While back in the days, in both Eastern and Western Europe, contemporary warfare was rotating around heavy cavalry, and one can't have too many knights. Even if somebody managed to gather an army more or less comparable to Mongol hordes - heavier cavalry would just be meat for lighter riders making circles around them.

Besides, feudal lands never managed to be centralized enough to counter mongols. In medieval Rus' the need to centralize led to the rise of Moscow - and it took quite a while anyway.

Longbow was cheap and technically superior, but required training. Crossbow more expensive, required less training. Rulers of England less worried about rebellion, OK to invest in training. Rulers of France/Scotland not so happy because of fear to give potential of overthrow to the people (Scotland not in title, but in article along with France).

Perhaps an analogy could be painted with companies today. Those that churn, and those that nurture skills.

I see a possible problem with this theory: in a few words, according to the authors, the French and Scots did not adopt the longbow for fear of rebellion. Still, the nobles participated in the very battles that saw them defeated by the English and their longbow. If the longbow was cheap and relatively easy to adopt, a noble aspiring to the crown would have been able to develop the technology independently of the (unstable) central government and have an even easier road to glory against his own technologically inferior king.

I admittedly did not read carefully the whole paper, but this possibility does not seem to be addressed.

The intro still assumes that the battles won by the English in the Hundred Years War were due to the longbow (alone), which AFAIK is quite debatable (at least outside of England, where Agincourt is a bit of a national myth, even more than e.g. the Black Legend).

And never mind instability: the French also weren't as geographically isolated as the English, and thus it was easier for them to hire mercenaries, Genoese crossbowmen being a particular example.

The penetrative ability of the longbow is also greatly exaggerated, citing a book that did some pretty shoddy testing (flat sheets of poor quality metal used as targets, but hardened bodkins as penetrators, 10m distance, no padding).

I think that the way this paper discusses the longbow as the superior weapon may obscure a key fact here. Man for man, a crossbow is a superior weapon: it requires less skill to operate, has longer range, is much easier to aim, and has better penetrating power. The main advantage of a longbow is how simple and cheap it is.

Speed of reloading is another advantage the longbow has, but I think this article overstates it. While some crossbows do require using a stirrup or crank to load them, there are others that you can reload against your hips, and shoot from there, to increase your speed considerably, at some cost to accuracy. I know people who have managed to get 6 bullseyes at 20 yards on a crossbow in 30 seconds. Meanwhile, archers would not be firing at the maximum possible rate in battle; ammunition is a limited resource, and with the draw weights of warbows fatigue would set in quickly. Overall, with the archers they had and bows they had at the time, it is likely that the longbows were able to be a little faster than the crossbows, but it's not a night and day thing; and the range, accuracy, and penetrating power on the crossbows were better.

The simplicity became an advantage in a few battles, which came after substantial rainstorms that caused problems with crossbows' more complicated mechanisms. But the main advantage was how cheap and fast to produce they were; you could easily arm a large populace quite quickly. In order to take advantage of the longbow, you had to do that; you needed a very large number of archers to effectively take advantage of longbows, while you needed fewer archers to be effective with crossbows. But because it was cheap and simple, it was feasible to do that.

I think that cost and simplicity of the longbows were their biggest advantage; speed perhaps a secondary factor, but the sheer numbers were likely to be more important.

There is, of course, an interesting parallel here with some trends in modern military spending. The Joint Strike Fighter is a technological marvel; one of the most advanced pieces of military equipment ever. However, they are staggeringly expensive, and not actually the best dogfighters in the sky. You wonder how much more effective spending that money on more and simpler weaponry might have been.

I wonder if this practice of arming citizens with easily procured long-distance weaponry that in a less stable country would be feared to become useful in a rebellion also could be considered one of the memetic ancestors of modern American firearm culture.

I'm surprised there's no discussion of the importance of chivalric values in the French military. French knights were so married to the idea of valor that employing yeoman infantry was seen as dishonorable. Obviously this is directly related to the political context of state security, but it's worth considering the cultural factor as well.

Ah, Agincourt-fetish, one of the few fetishes you can display in public without looking too ridiculous (in the English-speaking world).

Longbows are great in open battle, yes. But the hundred years war was a war of sieges and raids (by the English and the great companies), and for those the stonemason is infinitely superior to the longbowman.

For the French, the winning strategy was always to avoid pitched battles and fortify river crossing points until English armies had run out of supplies, then patiently retake lost fortified places through siege.

>Yet the Hundred Years War (1337–1453) lasted longer than a hundred years, plenty of time for England's enemies to learn that their defeats were heavily influenced, if not caused, by the longbow.

Didn't England lose the Hundred Years War? At least looking at the map before and after - it lost everything on the continent, including the last remnants of the Angevin lands and Normandy, all lost to France, with France emerging significantly bigger and stronger as a result of the war.

While the longbow is a nice, nostalgic weapon, the crossbow is technologically more advanced, and in our civilization technology wins:

"Plate armor that could be penetrated by large crossbows, but was impenetrable by longbows, was uncommon in Europe until about 1380"

(Funny that as a child I initially made bows, yet soon switched to making crossbows, and they were interesting until I made my first single-shot handgun at the end of 1st grade :)

Explains the large number of hackathons we have these days: "...a ruler who wanted to adopt the longbow had to create and enforce a culture of archery through tournaments, financial incentives, and laws supporting longbow use to ensure sufficient numbers of archers."

A historian friend claims this is bollocks. England was not significantly more stable, and the longbow was not a superweapon on its own. It was part of a system of combat, of combined arms, involving knights fighting on foot and choosing the right terrain.

And when they didn't have the right terrain, those English longbowmen also lost plenty of battles. They had some spectacular victories at Crecy and Agincourt, but they also had their fair share of losses.

It's an interesting paper but "politically stable" isn't the right term.

A population able to defeat the infantry technology of the time requires a different social and legal position than one that doesn't have that ability. That is, the government needs more cooperation and consent of the governed. That government is stronger than other governments, because it can kill their armies. But it is more dependent on that population and so cannot abuse it in the same manner as those other, "weaker" governments.

I've only read the abstract, so I'm not sure if this is covered, but the longbow requires huge amounts of practice from a young age to be effective. You had to be incredibly strong just to draw the string. A crossbow, by contrast, was easier to draw, and men could be trained to use it in a far shorter time. English Kings had constant problems with procuring enough men capable of using a longbow, passing all sorts of laws banning all sports except archery etc. Perhaps the French simply couldn't find enough trained men?

Last summer I did a workshop with the longbow. It's real fun, and has many links to meditation, finding your center etc. I had seen it before, never liked it, but it was great.

In the end I shot at a 1.5 meter target about 100 meters away. You could barely see it, as it was lying flat on a hill. At first I could not believe that I had the power to get that far, but it worked out. I missed it by about 12 meters, which was not bad looking at the competition that day.

Interesting discussion, including the comments below regarding how barons didn't necessarily want an armed populace because it made it harder for them to stay in power. Does any of this sound familiar or applicable today (gun control debate)?

"We determined the true solution of medieval war puzzle. The medievals truly had to reason just like we do. We cite no such reason in medieval literature sources, just use indirect proofs that we are right."

The Brits have had the advantage of a technologically superior political system for a very long time. I usually make an argument that is quite similar to this to explain their rapid and unprecedented rise to imperial splendor.

tl;dr -- The longbow is worthless in most applications except military. The monarchy made it compulsory to train in bow use, so anyone recruited for the militia was ready in some shape or form to use the bow; in a relatively short amount of time anyone could be trained to use a longbow for battle. They also forced bow imports and kept prices very low throughout England.

The rest of the world couldn't enact such rules and thus could not make the Longbow a successful military weapon. It required years of training, not something you can do to a soldier who just got conscripted.

Interesting read of the introduction - and very good points. Perhaps yew was also way too hard to come by / expensive for the Scottish?

Also, it does make sense that training long-term military personnel was reserved to the ruling feudal class. Still, producing a longbow-compatible population of strong, loyal men might have had costs other than political ones. Precision-wise, a longbow is not a real tournament weapon, and you needed tall, strong men to wield it.

I learnt "canne d'arme and baton d'arme" the "fencing of the i-gnobles".

From feudality to absolute monarchy the raise of monarchy has been made at the costs of "Jaqueries". Peasant revolts of the "non nobles" "ignobles" in latin derived french.

The central control brought by the carolingien and then the bourbon as resulted in strong traditions: knights and nobility are also a force to squalsh revolts.

This and the dissolution of Lances towards "regular armies" after azincourt defeat (longbow involved) has been used to cut the fraternity at arms between feuds members. (Lances were like organic units of versatile men at arms doing their best to bring everyone alive the local feud included).

The strength of the knight were enforced like in feodal japan, by preventing the crowd to gain power.

For this, metal was considered the weapons of only knights.

Which means that when using the old franc laws for something as rude as sullying a women in a church out of the accepted "traditions", the divine judgement could be called ... a duel.

Needless to say peasants were not authorized to have metal ... officially.

So with all the jaqueries going on, you don't really want the peasants to have weired ideas about efficient wooden weapons.

And still monarchy was a vast joke at this time and era, cousins of the royal families were lending each others money, and were often tight by blood.

England had no interest to destroy the french society.

French kings had no real interest in defeating england. They were mainly aiming for weakening the local suzerain. The feuds.

Of course it backfired. Louis XIV almost get killed during the "fronde".

- How many other content websites have published for nearly this long and yet have their oldest articles remain on their original URLs? Most news sites can't even do a redesign without breaking all of their old article URLs.

After 20 years, they still are adapting to the 'new' platform. Look through their site with fresh eyes: If you were designing a news website (rather than moving a newspaper to this new platform), how many design, UI and functionality choices would you make differently?

A quick start:

* The separation of different forms of content: They don't really mix text with video, images and graphics, even though most web-native bloggers will do it. They seem to lack fluency with mixing media; it's a project for them. They'll staple on a video and decorate text with images and graphics, but they don't really communicate with them; they don't say, 'here's how Clinton responded to Sanders:' <video>, or, 'here was the scene when the earthquake struck' <video>, or even, in a movie review, 'here's what the scene looks like:' <video> or <image>. Instead, they try to describe the visual with text. Even explanatory graphics are a separate, special production, on a separate page.

* The font in their title: Back when printing fancy fonts was a technological feat, this font communicated that they were serious and sophisticated. Now, if you step back and ignore the history, it looks like a kid playing with fonts. (Look at it this way: would you ever use that font on a website you were designing?). It says, insists even: We're anchored to the paper age and will never let go. We're the old, dying generation. If you want something new, go elsewhere.

* The discoverability of content: Obviously mimicking a newspaper, but a bad choice for the web. How many links are on that home page (scroll down)? And even more content doesn't even appear there. All that hard work and content, unlikely ever to be found, buried and lost forever. It's tragic. But that's what they did in the hard copy newspaper so I guess it's ok.

* Also, where are the stories updated since I visited a couple of hours ago? Oh look, a red 'updated' indicator sits next to some links (just like the web 20 years ago!), which I only see if I examine every one of them (and how do I identify brand-new links in this massive page of links?). But where in this multi-page story are the new parts? I guess I'll just re-read the whole thing.

I say this all out of love. They are a very important institution. The news business is hard enough; stop handicapping yourselves! From the outside they look like they still, in 2015, haven't fully embraced the new technology. What would you say about another business's web team (one that was not adapting a newspaper to the web) that produced a site that looked like this? Egads. [1]

EDIT: Some minor edits and additions

[1] I'm not blaming the web developers; I assume they are working within the general constraint of: Make it look like the newspaper.

I like to call the incognito window my nytimes reader. I paid for the nytimes for a bit but it costs more per week than a monthly netflix subscription and it feels stupid to pay for not knowing how to use the incognito window. It's like a "I don't know how to use software" tax.

The NSA chief should be pro-encryption: the presence of backdoors in encryption (as demanded by some US law enforcement officials) creates a national security threat. Period.

If law enforcement needs access to encrypted data, they already have a few different ways. They can subpoena the data and throw the person who controls the key in jail until they release it, or they can just brute-force the encryption in cases of extreme national interest (it's too expensive to do for run-of-the-mill crime, but they have the capability if they really need it).

IMO the entire goal of encryption tech should be to make the government incur significant costs for every invasion of privacy they feel they have to perform. That way, they have the power to invade our privacy (and I don't think we as a populace can really stop them from having that power) but it's so expensive / cumbersome to use they really only use it in extreme cases. I'm fine with privacy being broken by the government on a case-by-case basis; the danger is when the government does a dragnet on everyone.

I would hope so. It's kind of their job. They've also published guides for strong encryption and best practices for operating systems for years. You dismiss their wealth of knowledge at your peril.

The FBI just wants to throw you in jail. What do they publish? Lists of people they want to throw in jail. Anything that stands in their way of throwing you in jail is bad, including your encrypted phone.

NSA chief can easily be pro encryption in public while breaking, subverting, and bypassing it in private as they always have. So, it's a smart position from a political standpoint. Action movie equivalent of being perceived as James Bond while pulling off super-villainy at the same time. Win win!

A play for political power. The NSA probably doesn't care if data is encrypted; they most likely already have a back-door to take data before it's encrypted or after it's decrypted. So if encryption is used, the FBI will be dependent on the NSA.

Wasn't this the whole situation with Dual_EC_DRBG? As far as I understand (which may not be that far when it comes to cryptography, admittedly), the NSA has already been caught intentionally weakening cryptographic standards via its influence over the NIST and by paying RSA.

There are two main points in this debate. Number one, we cannot abide having encryption weakened with back doors. Modern society relies on strong encryption. Number two, no amount of "magical technology" is ever going to replace human intelligence. The front lines in the war on terror is made up of human infiltrators and turncoats, not ones and zeroes.

He is most definitely not pro-encryption. He is just against legal access by other agencies. He wants the NSA to have a backdoor into every possible crypto system and to make them the organization everyone else has to come to for their data.

Several popular encryption schemes have been developed by or heavily influenced by the NSA (including algorithms mandated by FIPS and other government organizations), and there has been a lot of speculation that they added backdoors to AES and other algorithms.

So in reality they've had the ability to add backdoors all along, and it's in their best interest to keep it a secret whether they've added one, so it makes complete sense that their chief would say this.

"This CEO said the Valley used to be a place of "quirky people" but was now filled with "arrogant" people"

Valuations and market corrections aside, this comment is the thing that resonates with me the most. I got into tech and the Internet as a kid in the early 90s because of all the cool, intelligent weirdos thinking about and building the future. I've been traveling out to SF for about a decade now from NYC to do work, and it's been sad to watch that city go from a place I thought could be the only other place besides NYC I could live to a city I try to avoid. It feels like it's getting harder and harder to find those awesome weirdo hackers. The homogeneity is brutal in SF.

The one upside is that it's still the Internet and I don't have to be there physically to enjoy the parts of it I like.

I'm in Chicago, far away from the action in SV. Lately, my friends and I have been getting unsolicited calls to participate in the next rounds of funding for various "unicorns." We are mostly just laughing, because it's fairly obvious that we are only getting these calls now, after all these years of unbelievable[1] tech returns, because SV thinks that backwards and unprogressive midwesterners like us are going to be the goats buying the top of their grand pump-n-dump scheme.

I still don't understand why the tech media (and the rest of the industry) aren't calling out these silly valuations for what they are - marketing spin.

With term sheets structured the way they are, with liquidity preferences: if I as a VC invest $100 million in a startup for a 10% stake at a 1x liquidity pref, then the valuation I'm actually placing on that startup is $100 million, and that is therefore what its reported valuation should be.

No need for any repricing of these startups - just report their true valuation as largest amount invested with liquidity preferences.

Edit: Just an additional point to add - If I really valued it at $1 billion I wouldn't need the liquidity preference.
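To make that arithmetic concrete, here's a toy sketch (my own hypothetical function and numbers, ignoring participating preferences, caps, and other share classes) of what a 1x liquidity preference does to the payout:

```python
def vc_payout(exit_value, invested=100e6, stake=0.10, pref_multiple=1.0):
    """Toy model: the holder of a (non-participating) liquidity
    preference takes the greater of the preference amount or the
    pro-rata share of the exit; the preference can never exceed
    the exit value itself."""
    preference = min(pref_multiple * invested, exit_value)
    pro_rata = stake * exit_value
    return max(preference, pro_rata)

# At a $1B exit, the 10% stake is worth ~$100M either way.
print(vc_payout(1e9))
# At a $300M exit, the 1x pref still returns the full $100M,
# even though 10% pro-rata would only be ~$30M: pure downside
# protection, which is the parent's point about "true" valuation.
print(vc_payout(300e6))
```

The asymmetry is the point: the investor captures the billion-dollar upside while being insulated well below it, so quoting the headline number as "the valuation" overstates what the VC actually believes.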

The worst kept secret in Silicon Valley is that 90+% of the so called 'unicorns' are just donkeys with a plastic cone on their head.

It long since stopped being about innovation and disrupting markets and became mostly about shuffling piles of imaginary units around so a select few could get rich. Sadly, the pawns in this whole game will be all the employees holding options that are soon to be worthless... and who were convinced to take those options in lieu of a proper cash salary.

The only good thing this time around is that (unlike the last big Silicon Valley implosion) most of these companies are still private. So it will be really messy for some private investors and the greater San Francisco area is going to have a mess on its hands, but the broader US economy isn't going to get impacted as much. The stock market of 2015 also isn't propped up by bloated tech stocks in the way it was in 1999-2000.

This is a popular idea on HN that gets upvoted because people want to agree with it, and it makes them happy for some reason, but this is not a good article; it's basically just a quote. Lately, any kind of the-sky-is-falling article has a chance of getting good traction on HN without offering anything meaningful. The discussion in this thread is pretty poor, and so is the article.

This is one of the most blatant cases of spin and justification I've ever read. VCs put "ratchets and downturn protections" in order to satiate founder greed? Ha! More like VCs waved huge valuations in front of founders and said "don't worry about the terms, this is good for everyone."

If nothing else, this article does a good job of demonstrating why it's important to always check your sources and their biases.

I was a Web Developer during the first dot-com boom, and now that I work as an iOS engineer during this boom, I'm shocked at how many of the same mistakes are being repeated.

On almost a daily basis I get recruiters contacting me with interview offers from companies who have no chance of surviving past the end of the year. Their messages are often accompanied with bravado about their company's VC backers' other, more successful projects, which only makes me more skeptical.

I've always envisioned most tech businesses in general as an elephant standing on a board with a bunch of mice underneath it. The mice keep the elephant moving, but if the mice aren't continuously fed, they slowly die off. When enough mice die, the rest can't support the elephant and it thus squishes them. The elephant is shot shortly thereafter because it stopped moving. The mice that left sometimes come back to feed on the elephant carcass.

> There are about 144 unicorns right now. If only 10% break out, that's only 14 companies that will really make it.

Doesn't this ratio seem about right for any basket of unprofitable (or even zero-revenue) high-growth companies regardless of valuation? If those 14 winner companies average greater than a 10x return then everything pans out as expected--lots of risky investments together produce a reliable if more modest return on investment.

It seems like the only abnormal aspect is the size of the valuations, but that might be just what happens in a low interest rate environment--too much money chasing too few deals. Whether this affects this success rate of these investments remains to be seen I guess.

You have to keep in mind the source. Let me put it this way. A VC's job is to buy X (where X is equity in a startup). Of course every VC would love for X to go on 50% sale ... or even better 75% sale.

You saw this with hedge fund managers and the stock market as well. Lots of hedge fund managers went on and on about how irresponsible Bernanke was because he kept interest rates low which raised asset prices.

To be clear, I don't think this is a nefarious or even conscious process. However, I think if someone really wants a particular scenario it tends to color their thinking.

Also the actual claim made isn't as sensational as the headline. Just says that 90% might take a lower valuation. All that requires is a general market decline.

What's missing from this guy's evaluation: whether or not any of these unicorns are close to or at profitability, and what their actual market potential is.

If the businesses are based on bullshit, then he's probably right. If the valuations truly are wildly out of control (and it does seem like it), then sure, they're due for a correction.

It's also worth keeping in mind that if there are 144 of these startups, 90% of them is 129. That doesn't add up to that much money in SV terms. This seems like a tempest in a teapot. People love to make headlines, it seems, with "OMG bubble OMG!"

He says there is "blood in the water," and we are entering a 90-10 situation for the unicorn class of startups with billion-dollar valuations in which 90% of the startups will be repriced or die and 10% will make it.

Well, that's kinda how it's supposed to be. That's why expected value is more important. A few $200+ billion Facebooks and Googles can compensate for a lot of smaller $1 billion failures.

The unicorn is not a new buzzword, but the frequency with which it has popped up over the last several days makes me assume unicorn inflation is underway. Put on your muck repellent and get ready for bursting bubbles full of unicorn blood.

A repeat of the dot-com bubble: a few startups make it big, then everyone and their aunt tries to emulate them with slight variations or solutions to problems that don't exist. This app bubble is very pregnant and about to deliver something awful.

Does anyone try to count equivalence classes (after rotation or reflection) instead of raw board positions? To my mind, that would also be of interest if you want to know how many actually distinct game situations there are. I guess as a rough under-estimate you'd just divide the count by (4 rotations * 2 reflections)?
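For a small board this is easy to check directly. A minimal sketch (my own illustration, not from the post's code) that counts a hypothetical 3x3 board with three cell states, both raw and up to the 8 rotations/reflections, using a canonical form per orbit. As expected, the exact class count sits a bit above the divide-by-8 estimate, because symmetric positions have orbits smaller than 8:

```python
from itertools import product

N = 3  # a 3x3 board with 3 cell states, as a tractable stand-in

def transforms(board):
    """Yield all 8 rotations/reflections of a board given as a
    tuple of N*N cells in row-major order."""
    grid = [list(board[i * N:(i + 1) * N]) for i in range(N)]
    for _ in range(4):
        grid = [list(row) for row in zip(*grid[::-1])]         # rotate 90
        yield tuple(c for row in grid for c in row)
        yield tuple(c for row in grid for c in reversed(row))  # + mirror

def canonical(board):
    """Smallest transform of the board: one representative per class."""
    return min(transforms(board))

raw = list(product(range(3), repeat=N * N))
classes = {canonical(b) for b in raw}

print(len(raw))        # 19683 raw positions (3^9)
print(len(raw) // 8)   # 2460: the rough divide-by-8 underestimate
print(len(classes))    # exact count of positions up to symmetry
```

For real board sizes you'd do the same canonicalization inside the map step rather than materializing all positions, but the idea carries over.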

As a benchmark, it seems it would be interesting to port this to Java/Scala and run it on a Spark cluster; since it's map-reduce according to the post (I didn't look at the code), it should be possible, I would think.

Although the determinant of that matrix is 0; if its conjugate transpose's determinant is nonzero, then I wonder whether all valid possible configurations on this board can be represented by a complex Lie group?

Diacode team here. Thank you for sharing the post. This article is part of an ongoing series of blog posts that covers the whole Trello clone / tribute that our colleague @bigardone did.

We didn't submit it to HN before because we were waiting to complete the whole thing and create a proper index for all the articles. Part 6 will be published tomorrow, and you can expect a few more articles in the coming weeks.

I'd like to clarify that this is not a product, it doesn't cover all the awesome features that Trello has. It's just a learning experiment that we're sharing with the rest of the world.

Finally, to give you some background, we're a small Rails dev shop (5 guys) working remotely. We're now playing with Elixir and Phoenix and having a lot of fun with it. I'd totally recommend any dev play with Elixir, especially if you come from a Rails background.

A tip to everyone checking the codebase (JS parts) to learn about building a Trello-like application: there aren't any optimistic UI updates in this tutorial app (e.g. after dragging a card to another list, displaying the card at the dragged position before receiving confirmation from the server).

When you add optimistic updates to the mix with real time updates, things get much more complicated. Tracking pending updates, rolling back when something goes wrong, ordering of updates, reconciliation with server when the client is missing some updates etc.

I'm building something similar, and these have been the most time-consuming parts to build in a reliable way.
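For anyone curious what the simplest version of that bookkeeping looks like, here's a minimal sketch of optimistic apply-then-rollback (illustrative names only, nothing from the tutorial codebase; real reconciliation with ordering and concurrent pending updates is considerably harder, as noted above):

```python
import itertools

class OptimisticList:
    """Toy optimistic-UI state: apply a card move locally right away,
    keep a snapshot so it can be undone, then drop the snapshot on
    server ack or restore it on rejection."""

    def __init__(self, cards):
        self.cards = list(cards)
        self._ids = itertools.count(1)
        self._pending = {}  # update_id -> snapshot for rollback

    def move_card(self, card, new_index):
        update_id = next(self._ids)
        self._pending[update_id] = list(self.cards)  # snapshot first
        self.cards.remove(card)
        self.cards.insert(new_index, card)           # optimistic apply
        return update_id

    def ack(self, update_id):
        self._pending.pop(update_id)                 # server confirmed

    def reject(self, update_id):
        self.cards = self._pending.pop(update_id)    # roll back

board = OptimisticList(["a", "b", "c"])
uid = board.move_card("c", 0)
print(board.cards)   # ['c', 'a', 'b'] -- shown immediately
board.reject(uid)
print(board.cards)   # ['a', 'b', 'c'] -- rolled back
```

Note the snapshot trick only rolls back cleanly when a single update is in flight; with several pending you need per-operation inverses or full server reconciliation, which is exactly where it gets time-consuming.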

Hmm, this is pretty cool, but I'm a little sad my project, msngr.js, isn't listed (it was on HN 344 days ago and briefly hit the front page). I would love to see it stated what criteria are used in determining the projects, without me going through the source.

This is pretty neat and useful. A bit disappointed though that I didn't see my GitHub project listed. I would think you could build this list with a simple Google query "site:news.ycombinator.com link:github.com"

No, not "freely accessable online", not until the 8 year exclusivity agreement with Ravel expires. It's a pay service with a free tier.[1]

"Under the Harvard-Ravel agreement, Ravel is paying all of the costs of digitizing case law. HLS owns the resulting data, and Ravel has an obligation to offer free public access to all of the digitized case law on its site and to provide non-profit developers with free ongoing API access (Ravel may charge for-profit developers). Ravel will have a temporary exclusive commercial license for a maximum of eight years."

"For the duration of that commercial license, there will be a restriction on bulk download of the case law, with some notable exceptions. Harvard may provide bulk access to members of the Harvard community and to outside research scholars (so long as they accept contractual prohibitions on redistribution)."[2]

(My experience here is from back in the early 2000's working on getting pacer/states/etc to open up all of this data, so we could get it into google scholar and elsewhere. Often, they were willing to sell it to us, but they would not let us pay them pretty much any amount of money to make it just open and freely available, which is what we really wanted. Things have not gotten better, sadly, and in fact, have gotten worse)

They are projecting to have Federal and CA, NY, MA, IL, TX done in 2016, and the rest of the states in 2017. I'm curious why those particular states are being done first.

In particular, I'd have expected Delaware to be in the first group, because so many public companies are incorporated there, and so the decisions of its courts on corporate and stockholder issues have major national importance.

Offhand, I can't think of why MA or TX would be worked on ahead of DE. Of course it is possible that the volume of material from each state is a factor...it could be that DE is being done in the first group but has a lot of material, so it won't finish in 2016. I've never taken a look at the volume of each state's output and so have no idea which state courts handle the most cases.

So one truly fascinating aspect of legal practice is that we tend to operate in the gray areas. However, the traditional way of researching case law (reviewing a list of cases returned based on your query) does little to help you sort through the mess.

With data visualization, you not only see the cases, but you see the relationship between cases, and how the cases work together. Among the most significant benefits, the data visualization elements of Ravel Law will help you narrow your research to the most relevant cases more quickly, while also helping you find those cases and arguments that, for whatever reason, didn't rank at the top of your search.

The value in this appears to be relating concepts from one case to others through the visuals on the graph. The larger the circle, the more important the case. Lines connect one circle to another, and it's very easy to see which major cases are connected to other major cases. This is like a citator on steroids, in my opinion, as one can get to this point with a simple search. That saves multiple steps in developing the analysis that finds the value and use of related cases. The snippets help immensely in determining which related cases are of value.

We just briefly spoke on the phone about your article (http://www.nytimes.com/2015/10/29/us/harvard-law-library-sac...). I am a Harvard College 04-05 alum, one of Professor Zittrain's former students (I actually had to fight the administration to be permitted entry into his Law School course in 2001), and one of the first people Ravel tried to hire, because I am a programmer and I run a legal database called PlainSite (http://www.plainsite.org), which competes with them and receives about 16,000 unique hits daily worldwide. I was also a CodeX Fellow at Stanford Law School in 2012-2013, which is a program at Stanford that Daniel Lewis and Nik Reed are now also affiliated with. I tell you all of this only to point out that I am generally quite familiar with the principles, technologies and individuals involved here.

I've now corresponded with Jonathan Zittrain and Adam Ziegler at HLS, the latter by phone earlier today. I have brought to their attention a number of concerns, none of which have been resolved in my mind. They are as follows:

1. Harvard University is a Massachusetts not-for-profit organization. Its investment in Ravel, a for-profit corporation, via its XFund venture capital arm, and its subsequent contract with Ravel to earn "proceeds" (HLS's term) from that relationship, involves profit. The University could in theory lose its tax-exempt status over this deal. This is not the same as the Harvard Management Corporation investing in for-profit corporations to further the University's mission by earning capital gains and/or dividends; this is an exchange of cash for assets that Harvard claims it owns (even though case materials are public domain) and a contractual promise to monetize those assets through a for-profit company on an ongoing basis.

2. Worse yet, the deal involves profit from the withholding of public access to legal data, which is the precise ill that this relationship is nominally supposed to and claims to cure. In reality, it only exacerbates it by legitimizing, with all of Harvard's imprimatur, the monopolistic legal information model that has dominated the nation's judiciary for the past century and a half.

3. Professor Zittrain wrote an entire book on the dangers of internet lock-in and monopolies, yet his actions here are helping to create exactly the kind of monopoly he has become well known for warning about. According to Adam Ziegler's recent post on the HLS Library blog (http://etseq.law.harvard.edu), there are to be "bulk access limitations" and "contractual prohibitions on redistribution." This is inconsistent with precedent concerning openness to court records and First Amendment law. That aside, what will these restrictions look like exactly? We don't know, because...

4. ...Adam Ziegler told me that the contract with Ravel is not available for public examination and he did not know when it would be (if ever). He did read me a portion of the contract over the phone, which cited "non-commercial developers," and challenged me to come up with better wording. That's easy. I don't know what a "non-commercial developer" is, but I do know what a "non-profit organization" is. As an individual, I am a software developer who is the CEO of a for-profit corporation in a joint venture with a 501(c)(3) non-profit organization, which together maintain PlainSite. Does that make me a "non-commercial developer"? Although Mr. Ziegler insisted that the contract was not subject to interpretation because it is simply clear enough already, I strongly disagree, as I expect any lawyer would. All contracts are subject to interpretation. The contract needs to be posted.

5. One of Ravels investors is Cooley LLP, a law firm in the Bay Area. Based on what Daniel and Nik have told me in the past, Cooley has early access to Ravels software. Essentially this means that Harvard Law School is giving one particular law firm an advantage, which I imagine must violate a number of its own policies, and seems wrong on the surface.

6. Professor Zittrain claims it would have taken 8 years to raise the money that Ravel is providing for this effort. This is extremely difficult to believe. Although Mr. Ziegler refused to disclose how much money is actually involved, we can safely assume it is in the $5 million range given that Ravel has only raised just under $10 million and has had employees to pay for several years. Recently, a single donor gave Harvard University's engineering school $400 million, as your own newspaper reported (http://www.nytimes.com/2015/06/04/education/john-paulson-giv...). Harvard is also in the middle of a $6 billion-and-counting capital campaign, as reported by The Crimson (http://www.thecrimson.com/article/2015/9/18/capital-campaign...). Are we really to believe that the number one law school in the country (by some measures, anyway) could not scrape together the cash to buy its own scanners, or that it does not have scanners already? Are high-speed scanners even that expensive? Here's one on eBay for $1,450:

7. Mr. Ziegler could not answer my question as to why a consortium of non-profits was not consulted ahead of time. I know many that would have been eager to assist, likely including the Internet Archive in San Francisco, which already has several scanners.

8. Though I do not speak for them, I did notice that Harvard and Ravel seem to have nearly appropriated the name "Free Law Project," which is actually a project and non-profit organization at Berkeley that took over from work at Princeton. See http://www.freelawproject.org and http://www.courtlistener.com.

9. The Harvard Gazette has falsely reported, "The 'Free the Law' initiative will provide open, wide-ranging access to American case law for the first time in U.S. history." (See http://news.harvard.edu/gazette/story/2015/10/free-the-law-w...) I have been in regular contact with Jonathan Zittrain, Harry Lewis (an XFund Advisor who was Dean during my freshman year) and others at HLS about PlainSite since I brought the idea to them in 2011, almost as soon as I started working on it. Additionally, CourtListener (from the group at Berkeley) has also been in operation for years, offering open, wide-ranging access to American case law. There's also Google Scholar, which is free and certainly more wide-ranging than Ravel.

10. Ravel is, to the best of my knowledge, unprofitable. It remains unclear why Harvard would place its bets on an unprofitable startup, rather than solicit donations for a project, as it is so adept at doing, in order to ensure maximum sustainability.

Mr. Ziegler attempted to dismiss the above concerns on the grounds that we both still agree on the greater goal of open access to law. I certainly have done all that I can to promote open access to legal information, including developing prototypes for digital legal data standards and suing the courts themselves (http://www.plainsite.org/dockets/29himg3wm/california-northe...). But if we both agree on this greater goal, then why has HLS been almost completely unresponsive to requests for cooperative assistance for the past four years, while this deal was being negotiated in secret?

To be clear, Harvard is not the only institution that has made highly questionable and insincere claims about its legal transparency efforts. Stanford CodeX claims to support open access to the law, yet it is now directly sponsored by Thomson Reuters, the parent company of West Publishing, and its "innovation contests" involve pledges not to redistribute case materials. But I would expect the Times to be able to distinguish between academic puffery and genuine efforts to improve the state of our incredibly broken legal system.

I went on to build Precursor (https://precursorapp.com), which uses Datascript (https://github.com/tonsky/datascript) to accomplish a lot of the things that Om Next promises. If you haven't tried Datascript, you should really take a look! It does require a bit of work to make datascript transactions trigger re-renders, but the benefits are huge. It's like the difference between using a database and manually managing files on disk for backend code.

My understanding is that Om Next will integrate nicely with Datascript, so you can keep using it once you upgrade.

I've spent the last few weeks building a side project in Om Next, and this article is spot on. Really excited to see CircleCI's plans to migrate, as it'll be fun to read their code and learn how they use it.

Relay and Falcor are great, but when I look at their docs it's unclear how to integrate with whatever backend I want (especially Relay). Looking at Om Next, it was totally clear how to write my own backend. The tradeoff is that everything is a little more manual, but that control gives you a ton of flexibility.

In a small amount of code, I have a client that can query financial data in a bunch of different ways, and if the data isn't available it sends the query to the backend, which executes it against a SQLite database and returns it to the client. The components are all unaware of this: they are just running queries against data and everything just works.

Combine this with first-class REPL and hot reloading support via Figwheel (both frontend and backend) and I'm blown away at how fast I'm going to develop this app.

* core.async -- used for handling any kind of event dispatch and subscription. I do a unidirectional data flow type thing and it only took like 15 lines of ClojureScript.
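In plain JavaScript, the same unidirectional setup is about as small. A rough sketch (no core.async; all names are made up): every event flows through one dispatch function, handlers are keyed by event type, and state only ever changes in response to dispatched events.

```javascript
// Tiny unidirectional event dispatcher: one dispatch entry point,
// subscribers keyed by event type. State changes flow one way only.
function makeDispatcher() {
  const subs = {}; // eventType -> [handler, ...]
  return {
    subscribe(type, handler) {
      (subs[type] = subs[type] || []).push(handler);
    },
    dispatch(type, payload) {
      (subs[type] || []).forEach(h => h(payload));
    },
  };
}
```

Components call `dispatch` and never mutate state directly; a handler updates state and triggers a re-render, which is the whole "unidirectional data flow type thing" in miniature.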

This is one of the nicest front end development experiences I've had. Just the composition of these four libraries gives you a ton of flexibility and a good way to structure your application. You can use this setup to write a real-time syncing/fetching system with a backend database pretty easily.

Our team used Om for our app (balboa.io) for the first 3 months of development. We switched to Reagent and have been using that for the last 8 months.

We ran into the same problems with Om as the CircleCI guys, specifically:

1) Our front-end data model wasn't complex enough to merit a heavy-weight data access system that required a huge amount of extra digging to get right. We spent far too much time arguing about how to structure app-data, and it only got worse as the app got more complex. The cursor system in its first iteration was just too cumbersome (for exactly the reasons the author states). We kept trying to restructure the data model in order to get it to do what we needed. To be fair though, this is well known, and David Nolen has done a lot to alleviate this in recent releases (ironically, by making it more Reagent-like).

2) Our app is end-to-end encrypted and requires pulling down potentially hundreds of blobs, decrypting them, and inserting them into the DOM. Under these conditions, Om would kick it and the UI would grind to a halt.

We switched to Reagent, and found that it was far faster and "got out of the way" of development. Add-watch is amazing too. Our app is quite large (front-end SLOC is around 50k lines), and Reagent has scaled beautifully and is a beast at large-scale insertions (on the order of 1,000).

Om has some delightful features (undo ability is very powerful, routes coupled with Secretary is also great for Om), and David Nolen is a genius, but I think even the author has to acknowledge that the app-data/cursor construct is more of a pain than it's worth...

I really like David Nolen as a conceptual visionary. His work with Om and core.logic is great and has inspired a lot of derivative work. But I would never rely on his libraries in production. It seems like he always gets to 90% before moving on to the next new thing. 90% documentation, 90% cljs->js coverage, 90% tested, 90% issues addressed. I wouldn't touch Om unless I was willing to employ at least one person to work on Om full time.

This is huge. I think it might even be the single largest problem to most projects' progress. I've seen a lot of projects that have tried to force non-tree data into tree-structures, and it never works out well. Projects grind to a halt after 6 months to a year because nobody can keep track of the dance steps they have to do with the tree-oriented code to manage their graph-oriented data.

Real, actual tree structures are just incredibly rare. Even some things that "obviously" seem like they should be modeled as a tree are far better off as a directed graph. Like databases of family trees - it's possible someone is literally married to their sister! Less cringe-worthy examples involve large families living near other large families, with generational overlaps causing the children of one group to marry the grand-children of the other, and vice versa.

You don't really need React. If you can do the ostensibly hard work of figuring out the DOM edits yourself, your app will actually be faster than if you're using React, i.e. React has its own overhead. As long as the data relationship was right, I've never found it difficult to manage state thereafter. It's when the shoe doesn't fit that things become a problem.

The problem is, we have a systemic problem of treating front-end devs as not "real" developers, not capable of forging their own paths. It's not just from the outside-in, I see a lot of front-end devs lacking a lot of confidence in their own skills. As a culture, we yell at any JavaScript programmer going his or her own way, building their own thing. "Don't reinvent the wheel!" they are told. Screw that. I can think of at least 3 times off the top of my head that the wheel itself was significantly and usefully re-invented in the 20th century alone. The problem is not "reinventing wheels". The problem is this institutional fear of making one's own decisions, leading people to think they need to learn everything.

react-cursor gives this pattern in javascript, immutability and all, but with regular old javascript objects. It also comes with all the same caveats as in this article. (I don't speak for the creator of Om, I speak for myself as the author of this library which was inspired by Om and Clojure)

The beauty of the state-at-root pattern with cursors is that each little component, each subtree of the view, can stand alone as its own little stateful app, and they naturally nest/compose recursively into larger apps. This fiddle is meant to demonstrate this: https://jsfiddle.net/dustingetz/n9kfc17x/
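A toy version of the pattern in plain JavaScript (this is not the react-cursor API itself, just a sketch with made-up names): a cursor is a path into the root state plus a way to read and swap the value at that path, so each subtree component works with "its" state without knowing where it lives in the whole app.

```javascript
// Toy cursor over root state. set() rebuilds the spine immutably and
// notifies the root, which is where a re-render would be triggered.
function makeRoot(initial, onChange) {
  let state = initial;
  function cursor(path) {
    return {
      get: () => path.reduce((s, k) => s[k], state),
      refine: k => cursor([...path, k]), // narrow to a child key
      set(value) {
        const build = (s, i) =>
          i === path.length
            ? value
            : { ...s, [path[i]]: build(s[path[i]], i + 1) };
        state = build(state, 0);
        onChange(state); // e.g. re-render from the root
      },
    };
  }
  return cursor([]);
}
```

A todo-item component would receive `root.refine('todos').refine(0)` and be none the wiser about the rest of the tree, which is what lets subtrees compose recursively.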

> The tree is really a graph.

Solving this impedance mismatch is the main UI research problem of 2015/2016: Om Next, GraphQL, Falcor, etc. It's still a research problem, IMO. The solution will also solve the object/relational impedance mismatch, I think, which is a highly related problem, maybe the same problem.

I would love to see some code examples of this part: "If we try to show a component that needs to know the current user's initiated builds, that triggers an API call that asks the server for the data. If we stop using that component, we stop making that request. Automatically."

I'm currently knee deep in a react/redux implementation, which I guess is quite similar.

I am surprised their backend is written in Clojure. I would think it makes hiring developers much harder (a smaller group of people know it) and training people a lot harder. You can jump onto a project and learn enough Go to fix bugs in a day or so (less than a week for sure). I am not sure the same could be said about Clojure.

> We could insert new list items into the existing DOM, but finding the right place to insert them is error-prone, and each insert will cause the browser to repaint the page, which is slow.

I don't think the last part is true. Browsers don't repaint (nor do they reflow) the page until it's really needed. So if you have a loop that modifies the DOM multiple times, but does not read from the DOM, the performance hit described by the author should not occur.
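Either way, a common defensive pattern is to build the markup off-DOM and touch the live document exactly once, so there is no chance of interleaving reads and writes that would force layout. A sketch (the in-browser assignment is illustrative only; the escaping covers just the basic HTML metacharacters):

```javascript
// Build all list items as one string, off-DOM.
function renderListHTML(items) {
  const escape = s =>
    String(s).replace(/[&<>"]/g, c =>
      ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));
  return items.map(item => `<li>${escape(item)}</li>`).join('');
}
// In the browser: listEl.innerHTML = renderListHTML(items);
// One write, no reads in between, so layout runs at most once.
```

Reading a layout property like `offsetHeight` between writes is what forces the browser to flush pending layout work inside the loop.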

Benefits being low for most things, costs high, and risks...uhmm...nah. Only exception I can think of is very limited amounts of sensors (eg. is X on?).

What's the benefit of me turning on a gas stove remotely? Almost none. What's the cost of someone else turning on my gas stove? Really high. How much is the risk? Way too high.

Then there's smart devices, another component of IoT. But how much smarts do we actually want? Screens are nice. Making my shower multi-touch isn't (capacitive touch + water = no bueno; imagine a water-from-hell scenario and no way of turning it off with your wet hands). Fridge compiling shopping lists automatically? Neat. Cheap Android tablet that comes with a fridge glued to it? Nah.

The only utility I see is locally connected devices. Using your phone as a remote. That seems handy. To a certain degree, we have that. Extra points if I don't need to download an app for everything, because don't you dare tell me that your Bluetooth on/off switch needs a 15MB .apk. If I gave a damn about the 14.9MB of branding you're including, I'd download your press kit.

There's some utility in home IoT widget-thingamabobs, but I'm almost certain we'll mess it up to no end in our excitement. There'll be some legitimately useful products coming from it, but most of it will be utterly cringe-worthy in retrospect.

I'm not sure how many others share my view, but I think the security benefit is worth the regulation. I have always been very skeptical of the "but it'll hurt innovation" claim. Won't it promote innovation in new approaches for securing low-cost devices? It sure seems nebulous to me, but I am willing to be convinced otherwise.

During a review of the W3C WoT & ETSI M2M standards I noticed that security is totally ignored in these tech-standardization bodies. The standards leave security as an exercise for the industry and the maker communities (who are not spending money on security until they have a problem). That said, it's also not trivial to implement something that at first sight seems straightforward, like 802.15.4 Security[0][1], without a deep understanding of the security architecture supported by the underlying platform.

Since the web is now getting "engaged" to the devices with CoAP and other protocols I wanted to create awareness of how bugs can spill over into the real world and do real damage there. If hacked insulin pumps or baby monitors don't scare you enough how about hacking a train? https://media.ccc.de/v/32c3-7490-the_great_train_cyber_robbe... ?? (everyone should probably watch this simply because SCADA strangelove guys are crazy and awesome)

Anyway, to counteract the usually very "marketing intensive" tone of IoT groups on LinkedIn, I decided to start this IoT Security group: https://www.linkedin.com/groups/4807429. It would be great to see people from all camps (IoT is a combination of 3 silos: embedded, web, and infosec) actively contributing with technical topics in this group. I will keep it open to posts from marketeers, but am heavily policing it for blogspam and will remove any posts that are not security related.

Also, I have some ideas about hackerspaces (http://hackerspaces.org/), which IMO every city should have and support. They're needed to propagate knowledge between these individual camps properly. (My contact details are in my profile in case you are interested in discussing more offline.)

well, that was mostly depressing, but i found the part about mudge and the UL-like initiative encouraging:

"Peiter Mudge Zatko is a member of the high-profile L0pht hacker group who testified before Congress in 1998, and since he's gone on to head cybersecurity research at the Defense Advanced Research Projects Agency (DARPA) before joining Google in 2013. In June, Zatko announced he was leaving the search giant to form a cybersecurity NGO modelled on Underwriters Laboratories."

and above that, a section about a similar "consumer reports" style rating organization. that was also the first time i'd heard of the group i am the cavalry, which seems like a cool idea (in principle, at least, without really knowing much about the actual group).

and i understand this objection to that sort of approach:

"It's not the same quality problem... UL is about accidental failures in electronics. CyberUL would be about intentional attacks against software. These are unrelated issues. Stopping accidental failures is a solved problem in many fields. Stopping attacks is something nobody has solved in any field. In other words, the UL model of accidents is totally unrelated to the cyber problem of attacks."

it is a very different problem in a lot of ways, but that doesn't mean that an approach similar in spirit or presentation is doomed to failure. and i think it does fit into the broad category of messy consumer information problems that are hard to solve with specific detailed regulation.

I have a Denon receiver with a web interface, which can control everything over HTTP (volume, source selection, firmware, etc). Of course there's no CSRF protection, so anyone could control my receiver just by getting me to visit a page that tried POSTing to 192.168.0.XXX; it would be trivial.

Most embedded engineer types know nothing about and never think about security. One example I saw once was an FTP server where the auth commands worked but were irrelevant. All commands always worked. It passed the unit tests therefore it was good.

Google also used to pay Mozilla to be the search engine there, until Yahoo outbid them.

It's amazing that Google search, which is quite useful, has negative market value as content. In the cable TV world, there are channels cable systems pay to carry, such as ESPN, and channels that pay to be carried, such as the Jewelry Channel. How did Google end up in the latter category?

Therefore the iPhone share of total search ad spend is about $3B. This link puts the overall market (inclusive of Android) at roughly $9B in 2014. That makes sense if you assume iOS to represent roughly 1/3 of devices.

Bing powers Siri, and I assume now powers all of the internet search functions (unsure about Safari.) So I wonder if Microsoft paid this amount, or if Apple decided that less money made more sense so they could harm their competitor?

In a world so abundant in information, I think user attention is indeed a significant resource every major player should fight for. I guess the distribution of attention follows a power law: a few entry points take up most of the mobile use cases. For example, I use Uber, WeChat, and GMaps much more often than the other apps. The search bar is definitely one of the most critical entry points.

The intimidating thing for me was always how heavy blogging software was. Never really liked the idea of centralized hosting, but hosting some huge PHP blob with a database never felt like it was worth it. I'm hosting my own site now running Hugo and I love it. I agree that most people have moved to centralized hosting, but I'm seeing a resurgence of self-hosting with static site generators like Jekyll, Middleman or Hugo. Things like static search[1] and static comments[2] are possible with some thought. Really neat and lightweight, and with gitolite I can keep the git repo containing the blog code on the server too, set up a commit hook to rebuild the site, and I'm maintenance-free. I have some npm postcss scripts that build my scss, autoprefix it, etc. and dump it into the assets for Hugo to build from, all in one go.

A lot of this is unnecessary; I could just be using CSS. I like that there's not all this asset-flow magic built out, just simple npm with a bash CLI. Unix philosophy and very little heavy lifting. I think there's still hope.

It's amazing that the web, which was really originally built primarily as a distributed publishing platform, has gotten so damn complicated to publish to.

Right now I've got my own self hosted platform, running Wordpress on a Digital Ocean droplet. The constant security updates for Wordpress are a nightmare, and it seems I have to hack both my theme and my post code every time I want to make a slightly interactive post. Never mind that there doesn't seem to be a decent way to preview posts on mobile.

As others have mentioned, it seems the best way to get more people in control of their own platforms would be with easier static tools.

On that note, I've been really impressed with org-mode and pandoc. I've been writing and generating code within a text based environment lately, but it still feels as though the process hasn't really budged or improved much at all in the past 15 years. With org-mode and pandoc, along with babel, I can write and test code, embed images, and generate decent html/pdf all in one go.

But for the casual user, I think it's become more difficult to self publish over the years, not less. The tools we've built have gotten pretty embarrassing if our goal is to get as diverse of a population as possible speaking and sharing their ideas openly on the web.

Cheers to everyone still working on tools like org-mode, pandoc, and latex. It's still relevant, and it still does a great job. If you haven't checked them out, take a look. I was certainly surprised by how far these projects have been taken.

My problem with Medium is that it lends this amazing aura of credibility to everything that is published on it. I think they've hit on the design equivalent of the brown note (of South Park fame) which makes readers mentally incontinent vis-a-vis the credibility of the source of the actual text...

"There was a promising short lived moment where smaller, topic-oriented blog networks like Svbtle (amongst others) started appearing, but even those seem to have gone by the wayside and are increasingly being replaced by Medium."

Back in 2002 I co-founded a blogging company. At that time we were competing with the likes of Blogger.com and Typepad.com. There were many other companies, at that time, which I've since forgotten. At one point, around 2003 or 2004, we created a list of all our competitors, and there were at least 100 names on the list.

My point is, the vast bulk of all blogging has always been on 3rd party hosted blogging sites. Self-hosted blogging has always been rare. I self-host my blog, smashcompany.com, on a server at Rackspace, but this has always been a rare option.

All the same, I am intrigued by the question. If anyone has historical data on this, it would be fascinating to know when self-hosted blogs hit their peak. If Technorati.com had survived in its original form, it would be in possession of this historical data, but sadly, the original Technorati.com is dead.

I'm embarrassed to say that I still hand-write my blog directly in HTML using Notepad++ and manually FTP changes to the hosting company. Most of my blog is static HTML, with a smidgen of script for analytics or occasional interaction. Every now and then I'll use some light PHP (typically when I need interaction with a database on the server).

Jekyll and GitHub Pages keeps the deployment simple, and Google Domains has proven to be simple, cheap, and reliable. I tried Kloudsec out last week on a whim after seeing it on HN, and so far it's great - simple, free SSL with Let's Encrypt.

https://evancordell.com if interested. It needs a little more love before I'd really say I'm pleased with it, but I'm very happy with how cheap and easy it was to set up a personal blog with SSL.

I've been running a small hosting company since 2002. Started hosting just my blog, then friends, then their friends and so on. I used to have about 300 blogs total, now I'm down to about 250, and it's slowly dropping every month. A few of those people moved to other hosting, and kept the blogs, but really most of them just said "I'm giving up, no time to blog when I'm busy on Twitter and Facebook"

I think many people feel like they get out what they needed to get out on Twitter/Facebook. They used to write on their own blogs to get things out, now it's elsewhere.

I own a self-hosted blog and am actually in the process of deciding whether to transfer over to medium. To be honest I'm pretty much decided that I will, because it's just easier, not to mention I can save myself some hosting fees.

The key questions of the debate on the cons side of switching, assuming you're blogging for fun and not thinking particularly about advertising or massively customised SEO strategies, seem to be:

1. do I own my content?
2. will my content be accessible forever?

As this post highlights, the answer to (1) on medium is YES. So, no problems.

The answer to (2) is also, for all practical purposes, YES, but you shouldn't depend on it.

But is this really such an issue anyway? I certainly assume that the vast majority back up their photographs, just by nature, and how difficult is it to back up the plaintext of your blog pieces too? If you have backups, and the answer to (1) is yes, then really, it starts to look like an easy decision.

I recently switched to Medium and couldn't be happier. With Ghost I was spending more time tweaking and maintaining purchased themes than I spent writing.

It's really really really fucking hard to run a blog that works well on desktop/tablet/phone and doesn't crash if you get a traffic spike. How many self hosted blogs can handle 500,000 hits in less than a day? Not many.

Medium will probably die someday. That's fine. I own my content and my content URLs. I'll simply port it to a new platform. It wouldn't be the first time.

I don't think it's dead; self-publishing, I'd argue, is easier and cheaper than ever. My blog has all the power of S3 for scaling, with free SSL from Cloudflare, and I pay peanuts for it: the current bill is $0.03, and some months it gets closer to $1.

As a blog consumer, rather than producer, I also have reservations about Medium-esque sites, but from the opposite perspective.

There's already an infinite quantity of interesting content to read, and it seems reasonable to expect the amount of worthwhile material to keep rising, since I keep finding writing and creations I was unaware of when they were being made. With all this stuff, I want to control where and when I read, and how I filter, manage, follow, and store it. At some point, platform operations reflect a business plan, and that plan may or may not allow for one or more of my preferences, for reasons of $. I guess I just prefer a relationship where a standard or pseudo-standard gives the user control, at the very least the ability to choose between differing vendors.

Then again, as I'm barely capable of managing a basic server install, I'm fully aware of why people throw in with hosted systems. I'm hoping for great things from stuff like Sandstorm.

I host my own blog (http://junglecoder.com) on a VPS at the moment. But I went rather overboard with it, as I built my own CMS-lite in Go. I was in college and wanted to learn how the web worked at a decently low level (lower than WordPress or Rails).

What I've discovered is that having a VPS opens up a world of opportunities for network-related things. I've used that site to host Ludum Dare entries, ClickOnce .NET apps, and a wiki profile image that I used to see if anyone was looking at my page on a company wiki. An SSH tunnel has let me bypass some firewalls that block the majority of ports... I've learned a lot on that server. Some of the best $50/year I spend in terms of hosting.

I think the age of everyone having their own domain running their own code is starting to expire, just like the age of running your own email server. There's just too much spam and too many bad actors out there; you have to prove your site innocent to the big indexes before you can get anywhere. If you publish on Medium, Facebook, Blogspot, or another platform that has "rep", people assume the spam is filtered out by the platform, and they treat your content less skeptically.

Turns out AOL had the right idea the whole time -- people want platform-specific keywords and they want to trust the platform's caretakers to decide what's OK for them to see.

I "self-host" using Pelican + S3. It's super cheap (< $1/month) and pretty easy; the only real downside is that all of the Pelican themes are really ugly, and I'm not good enough at design to make a better one.
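For anyone curious what a Pelican + S3 workflow looks like, here's a minimal sketch. The bucket name `my-blog-bucket` is a placeholder, and it assumes you already have a `content/` directory, a `pelicanconf.py`, and AWS credentials configured:

```shell
# Install Pelican and generate the static site into output/
pip install pelican markdown
pelican content -o output -s pelicanconf.py

# Upload the generated HTML to S3, removing files deleted locally
aws s3 sync output/ s3://my-blog-bucket --delete

# One-time setup: serve the bucket as a static website
aws s3 website s3://my-blog-bucket --index-document index.html
```

From there it's just a matter of pointing DNS (or Cloudflare) at the bucket's website endpoint; rebuild and re-sync whenever you publish.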

Honest question: how did Medium become so dominant for blog posting so quickly? The design is no doubt beautiful, but there's nothing special about it. I'm curious why so many bloggers suddenly decided to publish there. I see tons of tech posts there now.

The hard part is not getting a website to serve an HTML page. It's that modern UI standards for HTML publishing are pretty high. Finding a theme that you like and most other people will like (let alone writing one yourself) is a hassle for most people who aren't front-end developers.

You can point to lots of web sites that are hard to read, but that just proves the point that people are rather finicky about it these days.

Getting hosted by someone else is incredibly convenient. They take care of all the work of maintenance, security, reliability, and even give you tools to increase your visibility on the web and design your blog. IMHO only hobbyists or people with a very good reason should self-host. If you don't like one company's terms, look into the many other blog providers out there.

At sunsed.com we are 100% dedicated to creating the best blogging platform (and, in the near future, a full CMS). We hope to re-energize the world of self-publishing with a managed solution that lets you import from and export to any other CMS/blogging platform!

Right now we are working on an IDE inside SunSed so anyone can create their own template with HTML++ (our own templating/programming language).

I use and highly recommend hexo.io with its S3 deployment plugin. It's as good as Jekyll but easier to modify and theme if you come from a web dev background, as it's written in Node.js rather than Ruby.

I went from years of using blogger.com to trying Wordpress for a few months. Then I switched to Jekyll and statically generated blog articles. In the end I went back to blogger.com because I figured that if I needed a third party like Disqus for comments, then I might as well use blogger.com.

I am working on a project that combines DNS, WWW, and WebDAV servers to simplify blog self-hosting: your blog can always sit connected as a mapped network drive, and to add a new website you just create a directory named after the new domain. https://github.com/parkomat/parkomat

I used Docker recently to host my own blog, in an attempt to put ads on it and whatever Wordpress extensions I wanted. Despite getting a healthy number of views (nothing extraordinary) the income was basically 0. It just wasn't worth the candle.

I've been looking for intelligent conversation online for over 25 years. For a time it was Usenet. I mostly missed the Well, though I caught mailing lists, Slashdot, and for a brief moment, G+ (it's still there, and I've cultivated a useful community, though the reach is small).

The methodology uses the Foreign Policy Top 100 Global Thinkers list as a proxy for "intelligent discussion", the string "this" to detect English-language content generally, and the arbitrarily selected string "Kim Kardashian" as a stand-in for anti-intellectual content. Google search result counts on site-restricted queries are used to return the amount of matching content per site, with some bash and awk glue to string it all together and parse results.
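The ratio arithmetic behind this can be sketched in a few lines. The counts below are made-up placeholders (the real numbers come from the site-restricted Google queries described above); only the computation is illustrated:

```python
# Toy reimplementation of the comment's metric. Inputs per site:
# fp_hits  - pages mentioning an FP Top 100 Global Thinker
# kk_hits  - pages mentioning "Kim Kardashian"
# total    - total English-language pages (matches for "this")

def discussion_ratios(fp_hits, kk_hits, total):
    """Return (FP per 1,000 pages, KK per 1,000 pages, FP:KK ratio)."""
    fp_per_1000 = 1000 * fp_hits / total
    kk_per_1000 = 1000 * kk_hits / total
    fp_to_kk = fp_hits / kk_hits if kk_hits else float("inf")
    return fp_per_1000, kk_per_1000, fp_to_kk

# Hypothetical counts for two imaginary sites
sites = {
    "example-blog-host": (120, 30, 40_000),
    "example-social":    (900, 4_500, 2_000_000),
}

for name, (fp, kk, total) in sites.items():
    fp_k, kk_k, ratio = discussion_ratios(fp, kk, total)
    print(f"{name}: FP/1000={fp_k:.2f}  KK/1000={kk_k:.2f}  FP:KK={ratio:.2f}")
```

A high FP/1000 with a low KK/1000 is what the comment treats as a marker of substantive discussion; raw size drops out of the per-1,000 figures.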

As expected, Facebook is huge, as is Twitter. When looking at the FP/1000 (hits per 1,000 pages), KK/1000, and FP:KK ratios, more interesting patterns emerge.

Facebook beats G+, largely.

Reddit makes up in quality what it lacks in size, but Metafilter blows it out of the water. Perhaps a sensible user filter helps a lot.

The real shocker though was how much content was on blogging engines, even with a very partial search -- mostly Wordpress and a few other major blogging engine sites. Quite simply, blogs favour long-form content, some of it exceptionally good.

But blogs suck for exposure and engagement.

This screams "Opportunity!!" to me. I've approached several players (G+/Google, Ello) with suggestions they look into this. Ello's @budnitz seems to be thinking along these lines (I'm a fan of what Ello's doing, but its size is minuscule, and mobile platform usability is abysmal.)

One of the most crucial success elements for G+ is the default "subscribe to all subsequent activity on this post" aspect. Well, that and the ability to block fuckwits (though quite honestly ignore would be more than sufficient). There's a hell of a lot else to dislike, but those two elements are crucial to engagement.

I don't follow the author's complaints about terms and conditions. I suppose language like "we can change these terms any time and your use of the site constitutes acceptance" sounds ominous at a naive level, but what's the alternative? It would amount to some form of preventing users from using the site until they click "agree", and then doing that again every time the T&Cs change, right?

Half the time I see Medium posts; the other half, something hosted with Jekyll + GitHub Pages. Which technically isn't self-hosted, but is still quite different from just writing on Medium or something of the sort.

However, I suspect Hacker News readers are not the average, and I do think there's a downward trend in self-hosting blogs versus using Medium/Wordpress/Tumblr or even Blogspot.