The author draws a hard distinction between Udacity/Coursera MOOCs (good) and traditional master's degrees (bad). I'll interject that with Georgia Tech's Online Master's in Computer Science program [0], which is delivered via Udacity and insanely cheap [1], you can get the best of both! (Their "Computability, Complexity and Algorithms" class is one of the top Udacity courses cited in the article.)

Keep in mind that a traditional degree program does have a huge advantage over a strict MOOC: accountability. It sounds good to say that anybody can go push themselves through one of these courses. Try pushing yourself through ten, and actually writing all the papers and implementing all the code, while working full time and having a family. That grade looming at the end of the semester really does wonders for your motivation. Plus you can get help from live professors and TAs, and the Piazza forums for OMSCS are full of smart, curious students who love talking about the subject at hand. There's a richness to the degree experience that I don't think you get with scattered classes.

"Whether passing an algorithmic technical phone screen means youre a great engineer is another matter entirely and hopefully the subject of a future post."

This sentence, plus the inverse correlation between experience and "interview performance" shown there, smells strongly of interviews that are biased toward the platform's own format rather than reflecting real technical interviews.

From the data, it looks like the questions asked on that service are the kind you learn in university, and after many years of not using that knowledge, it fades away.

This is reinforced by MOOCs being the 101 courses of the subjects they cover. It would be interesting to see whether the interviews include trivia questions from 101 courses.

The most obvious bias is in the clickbait title. Those 3K interviews happened on a specific platform, meaning they were done in a specific way.

So after checking their results, it seems that interviews done through that service benefit people with fresh university or 101-level knowledge.

What worries me more is the lack of improvement, and perhaps the moral superiority, of ending the article with "these findings have done nothing to change interviewing.io's core mission". It feels like the entire statistics game shown there was meant to feed back what they already knew.

Thanks for writing this Aline. As a recruiter for almost 20 years, I wish I had access to all my data and then the time to compile it, and anecdotally I'd expect the finding about MOOCs would be similar.

The most selective of my hiring clients over the years tended to stress intellectual curiosity as a leading criterion and factor in their hiring decisions, as they felt that trait had led to better outcomes (good hires) over the years. MOOCs are still a relatively recent development and new option for the intellectually curious, but it's not much different than asking someone about the books on their reading list.

Unfortunately, demonstrating intellectual curiosity often takes up personal time, so someone with heavy personal time obligations and a non-challenging day job is at a significant disadvantage. One could assume that those who have the time to take MOOCs also have time to study the types of interview questions likely favored by the types of companies represented in this study.

I am perplexed why anyone would think that interview performance has any interesting statistical relevance. Much more interesting would be how successful the candidate was after being hired by the company.

* There is a large effect (which I assume means correlation?) between performance and being at a top company, with a smaller effect from top school. How far out of school is the interviewee? How far out of the top company is the interviewee? The time elapsed is probably a confounder.

* Years of experience has no effect. This may be due to survivorship bias, where the top potential performers don't need to do interview practice on their site.

* Speaking of having no effect, there is no such thing as "not achieving significance"... I'd rather see the estimated effect sizes with error bars. Is "founded a startup" listed at zero because the effect is 0.05 +/- 0.10, or 0.40 +/- 0.50?
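The error-bar point here can be sketched quickly. The numbers below are hypothetical, taken from the ranges mentioned in the comment, and the language choice (Rust) is mine, not the article's:

```rust
// An effect reported as "zero / not significant" could be a small,
// tightly bounded effect or a large, poorly bounded one. Both kinds
// of error bars cross zero, but they tell very different stories.
fn crosses_zero(effect: f64, half_width: f64) -> bool {
    effect - half_width < 0.0 && effect + half_width > 0.0
}

fn main() {
    assert!(crosses_zero(0.05, 0.10)); // small effect, tight error bar
    assert!(crosses_zero(0.40, 0.50)); // possibly large effect, wide error bar
    println!("both read as 'not significant'");
}
```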

* There is mention of Coursera/Udacity having a huge effect, but not when coupled with top company. There is some speculation as to why, but it leaves out some other possibilities that can be easily tested. For example: are the people who don't take Coursera courses and are not from a top school significantly worse than everyone else?

I remember when I graduated from a "Top School" and interviewed at "hot startups" from the valley. I aced a lot of the interviews - why? Because I had just taken classes on LinkedLists, Binary Trees, HashMaps, etc... So when they asked me to whiteboard a "shortest path algorithm" it was just rehashing what I did in school.

Years later, looking back, I fail to see the relevance in most of the technical questions. In fact, if I had to do those questions over again today I would probably fail miserably. Yet, I have been in the industry for a while now and have worked with countless more technologies and have accomplished far more than my younger self.

Just because someone performs well in a technical interview doesn't mean they will do a good job. That is the data that really matters. I've interviewed hundreds of candidates as a hiring manager for some big startups, and from my experience technical interviews are not a great indicator of success.

I'm saying this coming from someone who has gone to a "Top School" and done multiple Coursera/Udacity/etc classes.

Yes, someone might be able to whiteboard a random forest or write a merge sort, but do they know how to engineer a system? Can the candidate:

> Communicate well with others in a group?

> Solve unique technical problems?

> Research and learn new technologies effectively?

> Understand how to push back to product owners if there's scope creep?

etc...

These are all things that are not really analyzed in many technical interviews.

As I'm reading this analysis all I can think of is that it is pretty useless - if not dangerous for the industry.

What I've found is that it is critically important that someone knows how to code at some basic level. But their ability to code and explain algorithms on the fly, while probably relevant in academia/research, is such a minor part of the day-to-day of a programmer, at least in my experience.

Interesting bit on the MS degree. I followed the link, and I'm not quite as surprised that the correlation is poor, or even negative, given the way the data was collected and analyzed.

Absolutely agree that some MS degrees have by now become less-rigorous cash cows that allow students to skip fundamentals such as data structures, operating systems, and compilers.

However, many CS MS degrees actually do require this as a background, to the point where some programs have emerged to prepare non-CS majors for MS degrees, kind of like those post-bac premed programs. It's hard to believe that those MS degrees, which require a decent GPA in those core courses, along with high GRE scores (sorry, but we are talking about interviewing skill, which may be more related to exam taking ability than job performance), wouldn't result in a similar profile to people with CS degrees from top schools.

This is fully acknowledged in the text of the article referenced in a link, but unless people follow it, I do think the message may be a bit misleading.

That's an aside, though. The value may very well be in the prep for these degrees (i.e., the post-bac CS coursework required for admission to a reputable MS program). If you can get that through online courses (Udacity or Coursera) via genuinely rigorous self-study? Yeah, that might do it, for far less money. I've audited a few of them, and they're the real deal: that's the real coursework there.

Not to harp on the "technical interviews are disconnected from actual work!" angle too much, but I'm reminded of a comment from a thread about the creator of Homebrew failing a Google interview. Someone pointed out that it goes to show that it's possible to create widely-used software without an intimate knowledge of CS. I wonder if that's a disconcerting fact for some employers to grapple with.

Until recently I worked at a startup as a Machine Learning Engineer/Data Scientist. There I got some experience interviewing people and looking at their resumes. In my experience, which is very limited compared to this post, people who put a MOOC on their resume are usually less qualified than people who don't.

There is nothing wrong with MOOCs, but they are almost always beginner-level. If you put them on your resume, it kind of implies you don't have a lot of experience beyond that. Putting the Coursera Machine Learning course on your resume is the equivalent of a Software Engineer putting Java 101 on theirs.

I would recommend putting projects on your CV instead. Even if you don't have a lot of work experience, just put side projects and school projects on there.

Interesting and surprising, especially the experience thing. I think I am a significantly better engineer than earlier in my career, so I assumed experience would count for a fair bit. Then again I have inherited projects from experienced guys who make crap high level architecture decisions and the code is way more difficult to work with than it ought to be.

But then this article seems to be measuring interview performance, not actual ability on the job. So is any of it actually relevant at all?

I wonder if "took courses" could be a stand in for "prepared heavily". It seems like people with all the other attributes might think they didn't need to study. People without them might think they did and took courses to "catch up". In my experience, preparation is the key driver of performance in these types of interviews.

It seems reasonable that a person who took a MOOC might have prepared in other ways as well while people who didn't probably didn't prepare much at all (since watching a few Algo lectures seems the most accessible refresher.)

I think you are seeing the effect of people who have decided for themselves to pursue lifelong learning. The Udacity/Coursera thing just clusters these people in a way that you notice them in the stats. But remember that statistics do lie. You need to dig into the reality behind the numbers, and question whether you are measuring all the right indicators.

My experience comes from several decades developing software and from time to time, hiring people. The people that worked out best, either as colleagues or hires, always seemed to be learning new things and were ahead of the curve trying out new techniques or tools before they became popular.

If you understand how a tool/technique becomes popular as the mass of software developers wrestle with new problems and finally find a way to master them, then it makes sense that constant learning makes some people stand out of the crowd. They happen to be the first ones to learn the new tool/technique and if they do not introduce it to their development team, then when management does make the decision to introduce it, the folks who know how to drive it have a chance to excel and appear to be rocket scientists.

Searched the article and the comments here for "Pluralsight", with zero hits. So what makes Udacity/Coursera preferable? TL;DR: I'm asking this because Pluralsight was a significant contributor to my landing my latest role after redundancies.

The long version: I recently landed a role after some time off, having changed from mainly back-end PHP/ColdFusion to C# in the last year. I was able to make the switch in my last role. For me, moving to C# was a big transition; as well as guidance from a (fantastic) mentor, I used Pluralsight to learn C#, ASP.NET, and DDD - e.g. from Jon Skeet, Scott Allen, and Julie Lerman, to mention but a few.

Being completely burnt-out on the old stacks, I was set on making my next role a C# one. I've come to love what Microsoft are doing with Core, open sourcing etc, as well as the strictly typed C# language and ability to use NCrunch with live unit tests. So I signed up for a year after relinquishing my corp subscription, kept doing their courses, and found the training material highly accessible with great quality content. Each interview was a learning process, when I didn't know something from a test, I'd go and study it so that I'd be better prepared for the next role. One of these was the study of data structures and basic computer algorithms, where I was lacking. I might not have had years of experience, but the experience I had was mostly best practice.

During my search, I typically got great feedback on the fact that I was doing Pluralsight courses, and it was a significant factor in being hired for the new role - it showed cultural fit, in addition to passing their tech tests (which happened to involve structures). My company had interviewed a lot of candidates, struggling to find the right talent. Just possessing technical skills is one thing, having the right attitude towards learning is another.

At any rate, I'll keep using Pluralsight to raise my proficiency in my new stack - even as an old timer, I am having a newfound level of enthusiasm towards my whole profession which I haven't felt since I coded in assembly on the good old Amigas. I would be interested in knowing why Coursera / Udacity might be better or more accepted in the marketplace though.

1. You have an undergrad degree in liberal arts
2. You pay as little tuition as possible
3. You take no time off and continue to work FT

These apply to me -- my undergrad was in English, I paid 6k total (27% of the 21k total cost) and went to school at night over 4 years while my career continued to progress.

Most of the people in my program couldn't write a FOR loop if their life depended on it, they viewed it (incorrectly) as a jobs program while the school needed the $$ to keep the dept afloat, so I'm not surprised they fared poorly in technical interviews.

But that doesn't mean the degree isn't useful. If you're already a programmer, it helps get your foot in the door at many places. HR managers/recruiters feel more confident forwarding on your résumé; they can't parse your GitHub repos.

The degree is icing on the cake, it's not going to magically turn you into the Cinderella of Programming if you have no real-world experience. I got my master's with a QA and a paralegal and today? They're still a QA and a paralegal.

That being said, timed technical interviews are almost universally asinine, IMHO. When in real life do you have 10 minutes to figure out a problem? Or are prevented from Googling the answer? The measure of successful programmers is how efficient and professional they are in problem solving, not how much useless information they can keep in their head.

Things I've never had to do in 'real' life:
- Never had to split a linked list given a pivot value
- Never had to reverse a string or a red/black tree
- Never written my own implementation of Breadth First Search

etc etc

Personally I'd rather see take-home assignments that roughly approximate the type of work you'd do, which in my career has been churning out new features or applications. Does knowing the time-complexity of radix sort vs heap sort really have a material impact on your effectiveness as a programmer? No.

This is a very poorly done analysis. At a minimum she needs to define top school/top company. Also I'd like to see the confidence intervals around the effect sizes. In addition, looking up MOOC information from LinkedIn may result in a lot of false negatives. (She doesn't mention whether MOOC courses in non-CS subjects count.) Did all the interviewees have CS degrees? What about the master's degrees - is she including non-CS ones? Is the sample of interviewees representative, or is there any selection bias that we should be aware of?

A study which doesn't answer so many basic methodological questions is garbage.

On the master's front, I went down a slightly unusual path. I enrolled in a master's program in music technology at NYU [1]. I already had a master's in engineering from Princeton [2], but after time away from the software world, I wanted to retool for a return to engineering, but with a focus on applications that actually mattered to me.

It turned out to be a very expensive, but very fulfilling decision, and it paved a route for a very successful past four years.

Compared to my first master's, it was less theoretical and much more project-based. In that sense, it was fantastic preparation for career work, because every semester, I had to conceptualize and ship 4-5 different projects in all sorts of subject areas. The value of that shouldn't be underestimated. It also directly led me to cofounding a startup that had a brief lifetime, but effectively converted me to a full-stack engineer.

Today, I don't use much of the subject matter I learned in my day-to-day, but I draw on the creativity, problem-solving skills, and work patterns every day.

My Princeton program was great too, but I thought I'd share about the NYU program, as that was the more outside-the-box choice. There's something special to be said for a master's degree when it's interdisciplinary and lets you focus on the intersection of engineering skills and subject matter expertise.

We really need a further correlation between people who pass the interviews and job performance a year later. I do a lot of interviewing at my current job and we have found no strong correlation at all between CS skills and actual ability to "get things done".

We toned down the CS type questions since they tend to take too long. We still ask a few basic tree and string manipulation questions to weed out the people who have no idea how to program and get insight into how the person thinks.

I still feel at the end of the day we could flip a coin on accepting an interview candidate once they have shown basic competency and have the same results.

I have been telling candidates that a public GitHub repo with a nice commit history carries much more weight with me than a CS degree, since we have been burned so many times before.

For people who attended top schools, completing Udacity or Coursera courses didn't appear to matter. (...) Moreover, interviewees who attended top schools performed significantly worse than interviewees who had not attended top schools but HAD taken a Udacity or Coursera course.

A possible explanation might be that people going through a regular degree typically spread themselves thin over many subjects (digital electronics, compiler design, OS theory, networking, etc.) while MOOC folks focus sharply on exactly the things interviews ask about (i.e. popular algorithms). It's like interval training for one specific purpose vs. a long regimen for fully rounded health. The problem here is not the academic system but how we measure performance in interviews. I highly doubt the results would be the same if interviewers started asking questions from all these different subjects instead of just cute algorithm puzzles.

If you know me, or even if you've read some of my writing, you know that, in the past, I've been quite loudly opposed to the concept of pedigree as a useful hiring signal. With that in mind, I feel like I owe it to you to clearly acknowledge, up front, that what we found this time runs counter to my stance.

Did the interviewers have access to the applicant's resume? If so, to what extent do these results simply reflect the interviewers' bias for top schools and famous companies?

While I do think that interviewing is broken, I would love to see the raw data behind this. For example, did Udacity courses have other related traits associated with them - i.e., did these candidates also have a certain number of years of experience, a degree, etc.? 3000 is a small sample size, and I'm wondering if there is some sampling bias here.

I conduct lots of tech interviews for SWE positions, and as everybody's boning up on algorithmic trivia, I've learned that I can get a stronger hiring signal by asking simpler questions that people with an aptitude for programming will succeed on and people with an aptitude for memorizing the implementations of algorithms will not.

(Simple example: given two closed intervals [a..b] and [c..d], how do you compare the four values to determine whether or not the intervals overlap? You may laugh, but it defeats about 50% of candidates in the first minute of an interview because they just don't understand simple relationships and Boolean expressions.)
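For the record, the overlap check reduces to a single boolean expression. A minimal sketch (in Rust, my language choice, not necessarily the commenter's):

```rust
// Two closed intervals [a, b] and [c, d] overlap exactly when each
// one starts no later than the other one ends.
fn intervals_overlap(a: i32, b: i32, c: i32, d: i32) -> bool {
    a <= d && c <= b
}

fn main() {
    assert!(intervals_overlap(1, 5, 4, 8));   // partial overlap
    assert!(intervals_overlap(1, 10, 3, 4));  // one contains the other
    assert!(intervals_overlap(1, 2, 2, 3));   // closed endpoints touch
    assert!(!intervals_overlap(1, 2, 3, 4));  // disjoint
    println!("ok");
}
```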

One of the reasons I got hired by Airbnb is that I took MOOCs, but I also believe that most of my knowledge comes from reading books, and that's a thing I didn't put on my CV. So even if showing an interest in learning opens up a huge number of opportunities for you, I think you actually have to go deeper than just enrolling in a couple of MOOCs.

Something I wonder is how the participants in these interviews were selected from the general population of job candidates. Painting with a broad brush, the best workers might not even be candidates, because they've already been hired. And the best candidates might be the least likely to seek coding interview practice.

I thought the most interesting finding was that completing Udacity or Coursera courses on programming/algorithms (for non-top-school graduates) was highly predictive of strong interview performance.

Basically a "bad" programmer that can't write maintainable code that prepares for technical interviews by brushing up on algorithms and whiteboard style questions will do better than a very good programmer with lots of years experience.

1. Go to college:
   a. Spend many semesters in lectures, all of which tangentially brush upon the final exam based on the whims of the lecturer.
   b. Cram for the final exam in a last-minute panic to crunch memory according to advice on content which was brushed upon during lectures.
2. Interview for a job:
   a. Cram for the interview by going to Coursera to crunch memory according to interview memes based on the whims of the interviewer.
   b. Spend the rest of the term of employment exercising skills which tend to be tangentially brushed upon during both interview and schooling, while the majority of actual tasks are googled and Stack-Overflowed into place based on arbitrary design decisions and politicized stack choices.
3. Results:
   a. Good interviewees have learned appropriate memes to reassure interviewers.
   b. Good students have learned obligatory cruft to reassure professors.
   c. Actual necessities are tangential to many or most entry barriers.

When you are interviewing for a specialist post (and most posts are specialist to some degree) you are looking for evidence that the candidate can do that particular job. Therefore a course that indicates that they have the particular skills required is highly desirable!

Huh, I hadn't bothered to list MOOCs on my resume since I didn't think employers would be interested, maybe data like this will make employers more interested in the courses, which would probably get more people to shell out for the certificates.

Very unsurprising for me. You are measuring your ability to solve algorithm puzzles. Most engineers don't actually do many algorithm puzzles in day-to-day work, especially the types of algorithms that interviews tend to focus on like sorting and dynamic programming. So "years of experience" is not measuring experience in what you're actually being tested on. On the other hand, you do exactly those types of things in many CS classes, and in Coursera classes, algorithms are exactly what you practice. So it makes sense it correlates.

Top company is a predictor for the obvious reason - it's selection bias for people who already passed those interviews at the company. You're not good at the interview because you worked at the company, you work for the company because you're good at the interview.

Master's degrees seem to be largely for international students needing visas, career switchers, etc., so I'm not surprised they are not a strong predictor. And if anything, the course material moves past the intro data structures material that whiteboard interviews tend to test.

The only huge surprise for me here is that Coursera is a stronger predictor than top company and top school. I would have predicted top company > top school > Coursera.

The post that I would be much more interested in is correlating performance reviews to interview performance. That gets suggested as a possible future post.

Sad to see so much detail paid to the data, and so little to the setup of the experiment itself.

It shouldn't be surprising that an online technical screen favors candidates who've participated in a MOOC, but is blind, say, to years of experience. A screen like this is timed-performance-at-a-distance, which resembles MOOC participation. The full spectrum of qualities that comprise a Good Hire might incorporate the other signals from the post, but this type of interview won't test them.

(I'll be the first to admit I'm biased against performative coding in engineering interviews. Tech screens like this are often necessary, though, so they have their place.)

Is this guy a paid shill for academic friends trying to boost enrollments and overcome the disillusionment of the younger people who realize too much emphasis is placed on academics and not enough on practical application?

The world needs more vocational schools and trade schools and technical schools than it does colleges and universities.

You should also check out https://reddit.com/r/babyelephantgifs. We just finished a fundraiser for David Sheldrick Wildlife Trust (an elephant orphanage in Kenya) in collaboration with the UK branch of the organization. But you can still donate!

The situation is really bad: if the next 10 years are as bad as the last 10, elephants will be basically extinct in the wild. This will have wide-reaching consequences, as elephants are a keystone species, which means they are extremely important to their environment. If they go away, ecosystems will collapse, which will cause further unrest in the general region.

This is really great news! But we can go even further: let's continue by banning rhino horn markets (would we call them keratin markets rather than ivory?). Keratin is the primary component here; the same protein that makes up human fingernails also composes rhino horns.

In Vietnam and parts of China, there are some who cling to a belief that eating rhino horn will cure/prevent cancer or increase libido. Those who believe this probably aren't aware that eating their fingernails would have the same effect.

Woot! Thank you China! Elephants are my favorite animal, and it's sad to see such majestic creatures slaughtered for such nominal things. Or any animal for that matter. Shark fin soup? Seriously, what a waste of a needed predator.

Heartbreaking documentary on the massive killing of elephants for ivory and how futile all of the efforts the African nations are taking to try to stop poaching and killing by local farmers.

Investigative segments include a Chinese journalist undercover with WildLeaks talking to the Chinese criminals involved in the massive illegal ivory trade.

I hope this was instrumental in getting the Chinese government to actually set a ban date for ivory. Currently their regulations are so lax and so corrupt that it's easy for "legal" ivory dealers to launder illegal ivory in order to sell millions of dollars' worth every day.

Did you know over 1000 Kenyan and other African game rangers have been killed by poachers while protecting elephants? Terrible.

We kill millions of pigs every year (intelligent animals) and farm millions of cows (driving many, many animals extinct through environmental destruction and the use of large amounts of land), but we expect China to care about the second-hand effects of the ivory trade.

They still have millions living in poverty, while we are rich (speaking as a westerner who eats meat, not for all of HN).

We love to be racist don't we?

I guess if we convince the Chinese the Rape of Nanking was cows and not elephants they can become like us.

I'm a lowly ancient Java programmer and I think Rust is far far more than safety.

In my opinion Rust is about doing things right. It may have been about safety at first but I think it is more than that given the work of the community.

Yes, I know there is the right tool for the right job and it is impossible to cover all use cases, but IMO Rust is striving for iPhone-like usability.

I have never seen a more disciplined and balanced community approach to creating a programming language. Everything seems to be carefully thought out and iterated on. There is a lot to be said for this (although ironically I suppose one could call that safe)!

A programming language is more than the language itself. It is the body of work, the community, and the mindshare.

If Rust were only concerned with safety, I don't think so much work would be done on making it so consumable for everyone, with continuous improvements to compiler error messages, easier syntax, and improved documentation.

Rust is one of the first languages in a long time that makes you think different.

I think Rust is mostly about safety in the same way that skydiving is mostly about safety. Having safety features that you know you can rely on allows you to take risks that you normally wouldn't in order to accomplish some really awesome things.

(I guess in this analogy C is a parachute that you have to open manually, while Rust is a parachute that always opens at exactly the right altitude, but isn't any heavier than a normal parachute.)

> Safety in the systems space is Rust's raison d'être. Especially safe concurrency (or as Aaron put it, fearless concurrency). I do not know how else to put it.

But you just did! That is, I think "fearless concurrency" is a better pitch for Rust than "memory safety." The former is "Hey, you know that thing that's really hard for you? Rust makes it easy." The latter is, as Dave[1] says, "eat your vegetables."

I'm not advocating that Rust lose its focus on safety from an implementation perspective. What I am saying is that the abstract notion of "safety" isn't compelling to a lot of people. So, if we want to make the industry safer by bringing Rust to them, we have to find a way to make Rust compelling to those people.
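As a generic illustration of what "fearless concurrency" buys you in practice (a sketch of my own, not anything from the article): the type system makes sharing mutable state across threads without synchronization a compile error, so code like the following is safe by construction:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each increment a shared counter once.
// The compiler forces the Arc<Mutex<_>> wrapper: without it, sharing
// the counter across threads simply does not compile, so a data race
// cannot happen.
fn parallel_count(n: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    assert_eq!(parallel_count(8), 8);
    println!("fearless: {}", parallel_count(8));
}
```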

I was surprised to see Ada in the list of unsafe languages, since it always was sold to me as being designed for safety. A bit of searching leads me to believe that Ada is better about memory even though it mostly uses types for safety, and better enforcement of bounds on array access should solve overflow issues regardless. Am I missing something?

If you look at Rust from C then the point is safety, but if you look at it from the other direction, e.g from F# then what attracts you is that you will get the same safety guarantees (and perhaps a few more) but without the GC and heap overhead.

However, I feel that Steve Klabnik is trying to dispel myths about Rust not being anything "but" safety, to shape how other Rust developers talk about Rust, not denying that Rust's central purpose is around being a safe language.

This is because there is a lot of miscommunication about Rust. A lot of people who aren't immediately sold on the language walk away thinking it's slow (it's not), it's complicated (not really), and not production ready (it actually is). And that's because Rust developers don't know how to talk about Rust. I am guilty, for one.

Since Steve is such a huge part of RustLang development, it's his duty to direct the conscious effort to promote the language.

The issue with safety is that nothing is really safe. Once you have some level of safety in your programming language, you realize that there are still a lot of other sources of hazard (hardware errors, programming logic errors, etc.).

So I guess, it would be better to say that Rust is about decreasing unsafetyness or whatever the correct word for that is.

edit: since I see posts about Go, this is evidently another approach toward decreasing unsafetyness by providing fewer and easier to understand primitives so that the programming logic is harder to write wrong. It might come at a moderate cost for some applications.

I do not mean to pick on C++: the same problems plague C, Ada, Alef, Pascal, Mesa, PL/I, Algol, Forth, Fortran ... show me a language with manual memory management and threading, and I will show you an engineering tragedy waiting to happen.

I think if programming is to make progress as a field, then we need to develop a methodology for figuring out how to quantify the cost-benefit trade-offs around "engineering tragedies waiting to happen." The fact that we have all of these endless debates that resemble arguments about religion shows that we are missing some key processes and pieces of knowledge as a field. Instead of developing those, we still get enamored of nifty ideas. That's because we can't gather data and have productive discussions around costs.

There are significant emergent costs encountered when "programming in the large." A lot of these seem to be anti-synergistic with powerful language features and "nifty ideas." How do we quantify this? There are significant institutional risks encountered when maintaining applications over time spans longer than several years. There are hard to quantify costs associated with frequent short delays and lags in tools. There are difficult to quantify costs associated with the fragility of development environment setups. In my experience most of the cost of software development is embodied in these myriad "nickel and dime" packets, and that much of the religious-war arguing about programming languages is actually about those costs.

(For the record, I think Rust has a bunch of nifty ideas. I think they're going down the right track.)

The original Rust author makes great points about safety. I think this new thrust on marketing emerges from the Rust Roadmap 2017, which puts Rust usage in industry as one of the major goals. Currently Rust is about Go's age but nowhere close in usage. As the roadmap says, "Production use measures our design success; it's the ultimate reality check." I agree with that.

The author states: "A few valiant attempts at bringing GC into systems programming -- Modula-3, Eiffel, Sather, D, Go -- have typically cut themselves off from too many tasks due to tracing GC overhead and runtime-system incompatibility, and still failed to provide a safe concurrency model."

I think Rust is not about safety, but about reusability. Do you like taking on a dependency on someone's code when it is in C? Usually the answer is no: you roll your own. Rust means the end of that.

Rust means software that can be written once and used "forever". Thus it enables true open source. In comparison C/C++ pay a mere lip-service, by also giving you, along with the code, lots of reasons to worry.

I think it's a bit funny that in an industry that (supposedly) prides itself on "meritocracy", there are many people that refuse to use (or learn) performant memory-safe languages, when memory-safe code is always better than memory-unsafe code (in terms of resource usage, reduction of bugs, etc, etc.).

"Our engineering discipline has this dirty secret, but it is not so secret anymore: every day the world stumbles forward on creaky, malfunctioning, vulnerable, error-prone systems software and every day the toll in human misery increases. Billions of dollars, countless lives lost."

Billions of dollars and countless lives lost? I'm not saying that buffer overruns aren't a thing but this seems like marketing claims without substance. Yes, I read through the examples below, still think he's overstating it.

What about bare-metal options? Is there any development effort in that direction?

Most of the C that I do these days is Arm Cortex-Mx work. Realtime cooperative multi-tasking using an RTOS on the bare metal. It seems like Rust would be a great option for that kind of work if the low-level ecosystem were complete enough.

I completely agree. This is what I wrote on Reddit in response to Klabnik's post:

Rust can make such an important contribution to such an important slice of the software world, that I really fear that trying to make a better pitch and get as many adopters as quickly as possible might create a community that would pull Rust in directions that would make it less useful, not more.

Current C/C++ developers really do need more safety. They don't need a more pleasant language. Non C/C++ developers don't really need a language with no GC. Now, by "don't need" I absolutely don't mean "won't benefit from". But one of the things we can learn from James Gosling about language design is, don't focus on features that are useful; don't even focus on features that are very useful; focus on features that are absolutely indispensable... and compromise on all the rest. The people behind Java were mostly Lispers, but they came to the conclusion that what the industry really, really needs, is garbage collection and good dynamic linking and that those have a bigger impact than clever language design, so they put all that in the VM and wrapped it in a language that they made as familiar and as non-threatening as possible, which even meant adopting features from C/C++ that they knew were wrong (fall-through in switch/case, automatic numeric widening), all so they could lower the language adoption cost, and sell people the really revolutionary stuff in the VM. Gosling said, "we sold them a wolf in sheep's clothing". I would recommend watching the first ~25 minutes of this talk[1] to anyone who's interested in marketing and maintaining a programming language.

If Rust would only win over 10% of C/C++ programmers who today understand the need for safety, say, in the next 5-10 years, that would make it the highest-impact, most important language of the past two decades. In that area of the software world change is very, very slow, and you must be patient, but that's where Rust could make the biggest difference because that's where its safety is indispensable. A few articles on Rust in some ancient trade journals that you thought nobody reads because those who do aren't on Twitter and aren't in your circle may do you more good than a vigorous discussion on Reddit or the front page of HN. Even the organizational structure in organizations that need Rust looks very different from the one in companies that are better represented on Reddit/HN, so you may need to market to a different kind of people. So please, be patient and focus your marketing on those that really need Rust, not on those outside that group you think you can win over most quickly because they move at a faster pace.

I haven't used Rust, but generally speaking wouldn't you say that safety in some sense includes the other nice features? For instance, if it were safe but not fast (like, say, a GC language) it wouldn't be useful. So it has to be safe and fast (which it sounds to me like it is). Okay, so what if it were safe, fast, but a real hassle to use? Well that's not very useful either. So it has to be safe, fast, and usable. Just like Moxie's approach to security: focus on usability, so people actually use the damn thing. And it sounds like all the other nice features make Rust more usable.

I have been anxious to use Rust as a webserver, but so far there is no framework mature enough that I think I can use it. I had a look at a few, like mio and the Iron framework. There is no mature WebSocket implementation, nor an HTTP package mature enough to be used in production. I am looking forward to making an ultra-efficient PubSub server that supports HTTP poll and WebSockets. Hope my dream comes true :)
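In the meantime, nothing stops you from going framework-free: the standard library's `std::net` is stable, and an HTTP poll endpoint is just a TCP socket plus some header formatting. A toy sketch of my own (the `http_response` helper and the `/poll` path are invented for illustration; this is nowhere near production-grade, and does no real HTTP parsing):

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Hypothetical helper: build a minimal HTTP/1.1 reply for a long-poll request.
fn http_response(body: &str) -> String {
    format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
        body.len(),
        body
    )
}

fn main() {
    // Bind to an OS-assigned port and answer a single poll request.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    let server = thread::spawn(move || {
        let (mut stream, _) = listener.accept().unwrap();
        let mut buf = [0u8; 1024];
        let _ = stream.read(&mut buf); // read (and ignore) the request bytes
        stream.write_all(http_response("pong").as_bytes()).unwrap();
        // stream drops here, closing the connection
    });

    // Act as our own client to demonstrate the round trip.
    let mut client = TcpStream::connect(addr).unwrap();
    client.write_all(b"GET /poll HTTP/1.1\r\n\r\n").unwrap();
    let mut reply = String::new();
    client.read_to_string(&mut reply).unwrap();
    assert!(reply.ends_with("pong"));
    server.join().unwrap();
}
```

WebSockets are where this stops being fun by hand (the upgrade handshake and frame masking), which is exactly why a mature crate is missed.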

I think that safety is often about doing small things clearly. When you read about thread-safe computing, you end up with many rules that FP makes impossible to break. So even if it's mostly safety, it encompasses a larger area in disguise.

As someone looking at this influx of discussion from the point of view of a curious bystander, I can't help but be annoyed by two persistent misconceptions that keep being perpetuated in many statements of this kind.

1) Memory safety is or should be a top priority for all software everywhere. The OP goes so far as to state: "When someone says they "don't have safety problems" in C++, I am astonished: a statement that must be made in ignorance, if not outright negligence."

This is borderline offensive nonsense. There are plenty of areas in software design where memory safety is either a peripheral concern or wholly irrelevant - numerical simulations (where crashes are preferable to recoverable errors and performance is the chief concern), games and other examples abound. It's perfectly true that memory safety issues have plagued security software, low level system utilities and other software, it's true that Rust offers a promising approach to tackle many of these issues at compile time and that this is an important and likely underappreciated advantage for many usecases. There's no need to resort to blatant hyperbole and accusations of negligence against those who find C++ and other languages perfectly adequate for their needs and don't see memory safety as the overriding priority everywhere. Resorting to such tactics isn't just a bad PR move, it actively prevents people from noticing the very real and interesting technical properties that Rust has that have little to do with memory safety.

2) Rust is just as fast or faster than C++.

Rust is certainly much closer to C++ in performance than to most higher level interpreted languages for most usecases and is often (perhaps even usually) fast enough. Leave it at that. From the point of view of high performance programming, Rust isn't anywhere close to C++ for CPU-bound numerical work. For instance, it does not do tail call optimizations, has no support for explicit vectorization (I understand that's forthcoming), no equivalent to -ffast-math (thereby limiting automatic vectorization, use of FMA instructions in all but the most trivial cases, etc.), no support for custom allocators and so on. I'm also not sure if it's possible to do the equivalent of an OpenMP parallel-for on an array without extra runtime overhead (compared to C/C++) without resorting to unsafe code, perhaps someone can correct me if it's doable.
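One caveat on the safe-code side of this performance argument: idiomatic safe Rust doesn't necessarily pay per-element bounds checks. A small sketch of my own (an illustration of the point, not a benchmark; whether LLVM actually vectorizes the loop depends on target features and codegen flags):

```rust
// Iterator chains carry the bounds information in the iterator itself,
// so no per-element index check is emitted, and the optimizer sees a
// plain counted loop it is free to unroll or vectorize -- all without
// a single line of `unsafe`.
fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = [1.0, 2.0, 3.0];
    let b = [4.0, 5.0, 6.0];
    assert_eq!(dot(&a, &b), 32.0);
}
```

The `-ffast-math` point stands, though: without relaxed FP semantics the compiler must preserve the exact summation order here, which blocks FMA contraction and some vectorization.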

Over the past week or so, motivated largely by a number of more insightful comments here on HN from the Rust userbase, I've tried out Rust for the first time, and found it to be quite an interesting language. The traits system facilitates simple, modular design and makes it easy to do static dispatch without resorting to CRTP-like syntactic drudgery. The algebraic/variant types open up design patterns I hadn't seriously considered before in the context of performance-sensitive code (variant types feature in other languages, but are usually expensive or limited in other ways). The tooling is genuinely excellent (albeit very opinionated) and easily comparable to the best alternatives in other languages. I'm not yet sure if I have an immediate use for Rust in my own projects (due to the performance issues listed above and easier, higher level alternatives in cases where performance is irrelevant), but I will be closely following the development of Rust and it's definitely on my shortlist of languages to return to in the future.

However, I would have never discovered any of this had I not objected to the usual "memory/thread safety" story in a previous HN discussion and received a number of insightful comments in return. I think focusing on the safety rationale alone and reiterating the two hyperbolized misconceptions I listed above does a real disservice to the growth of a very promising language. I think Steve Klabnik's blog post to which the OP responds is a real step in the right direction and I hope the community takes it seriously. Personally, I know a few programmers who've entirely ignored Rust due to the existing perception ("it's about memory safety and nothing else") and in the future I'll suggest Rust as worthy of a serious look as an interesting alternative to the prevailing C++-style designs. I'm certainly glad I tried it.

Even if Rust adds increasingly more "unsafe" features in order to appeal to new developer groups, I agree that it should remain a "100% safe by default language", and they should continuously try to improve the performance of the safe code, rather than get lazy and say developers can just use the unsafe syntax if they want 3x the performance. This would only lead more and more developers to increase the usage of unsafe code. It would be even worse if Rust would allow unsafe code by default for any future feature.

The right granularity for error handling is important, as well as making it easy to handle (abort? providing a default value? doing something else?)

It's not that safety is not important, but code usability is important as well, lest Rust go the way of C++ hell (though I don't think it can get that bad, there are some warts, like "methods" and traits).

Rust is a language primarily built for systems programming. It has many strengths to celebrate, and brings curated best practices as well as its own novel features to systems programming.

However, most programmers in 2016 aren't "systems programmers" anymore. At the very least, most programmers who actively talk-up new technologies on web forums are not systems programmers. The majority (or at least the majority of the vocal and socially engaged) are web developers, mobile developers, CRUD apps and microservices, etc.

As interesting as Rust may be in the systems space, it doesn't bring much compelling new hype to the table for web stuff.

You have yet-another-concurrency-approach? That's great, but most web developers rely on an app server or low-level library for that, and seldom have to think about concurrency up at the level of their own code.

You have an approach for memory safety without a garbage collector? That's great, but most web developers have never even had to think much about garbage collection. Java, Go, etc... the garbage collection performance of all these languages is on a level that makes this a moot point 99.999% of the time.

You have a seamless FFI for integrating with C code? That's great, but after 20 years of web development I can count on one hand the number of times I've seen a project do this. And those examples were Perl-based CGI apps way back in the day.

Rust people seem almost dumbfounded that everyone hasn't jumped all over their language yet. And from a systems programmer perspective, memory safety without garbage collection does sound amazing. But you guys really need to understand that Hacker News and Reddit hype is driven by web developers, and that community isn't even sure whether or not type safety is a worthwhile feature! So really, it's amazing that you've managed to draw as much hype as you have. It's not about the mainstream popularity of your language, it's about the mainstream popularity of your field.

1. Several small languages have also introduced type systems to try to solve the memory safety problem, but all of them are less famous. Many factors determine whether a language, out of the dozens available, gets accepted on a massive scale.

2. In many cases it is not hard to do manual memory management; a lot of great software has been written with it. I admit the quest for better memory management is always worthwhile for systems work, but see #3.

3. A linear/affine type system[1] is not a panacea. "Used exactly once" is just a small case, and forcing everything into that pattern creates a lot of boilerplate, while many constraints and verifications on a system cannot be expressed at the type level at all. Is it truly valuable to push all of this into the type system?

4. The memory safety of Rust comes with a price: it has added a lot of complexity and language burden. Who likes reading the following function declaration? (borrowed just as an example):

fn foo<'a, 'b>(x: &'a str, y: &'b str) -> &'a str

5. So, finally, the question arises: does the current form of Rust's memory safety deserve to be the hope for the next industry language? I'm afraid...
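For what it's worth, the declaration in point 4 reads less badly once you see the lifetimes as borrow relationships, and in simple cases Rust elides them entirely. A small sketch of my own (the function bodies are invented purely for illustration):

```rust
// The declaration from point 4, filled in: the return value shares
// lifetime 'a with `x`, so it may not outlive `x`; `y`'s lifetime 'b
// is independent and unconstrained.
fn foo<'a, 'b>(x: &'a str, y: &'b str) -> &'a str {
    let _ = y; // y is only inspected; the result always borrows from x
    x
}

// In the common one-reference case, elision removes the annotations
// entirely -- the compiler infers the same relationship.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(foo("left", "right"), "left");
    assert_eq!(first_word("hello world"), "hello");
}
```

So the explicit `'a, 'b` form only shows up when a function takes several references and the compiler can't guess which one the result borrows from, which is precisely the case that is ambiguous in C too; C just leaves the answer in a comment.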

There must be many HNers who work at Facebook. Anyone willing to make a throwaway account and tell us how it feels from the inside for Facebook to be on the wrong side of so many ethical issues? It just seems like in so many dimensions they've been caught saying wrong things or appearing to outright lie, and I'm curious how developers who work for them think about aiding a company that seems to be so compromised at the moment. Now that it's fairly clear that the service doesn't serve any unique, unambiguously positive purpose, what world-changing mission can you possibly decide that Facebook is achieving these days?

This is a race-to-the-bottom. Everyone in this whole area has to compete with whoever is the scummiest exploiter unless they really go out of their way to sell their service with privacy and ethics as the top feature. So, some ethical niche services can exist, but meanwhile, everyone else is screwed, and network effects make any niche thing stay pretty irrelevant.

The only way to avoid races-to-the-bottom in a competitive market is with real, enforceable regulation that outlaws the worst of it and requires truly effective disclosure otherwise. That's not easy, sometimes it's impossible, and it often has major negative side effects and problems. But whether or not we determine that regulation is worth it, we know that races-to-the-bottom are a real thing, so we can give some leeway: each company isn't necessarily actively trying to be malicious, they are just competing in a race-to-the-bottom situation (and we can reject the dogmatic free-market people who deny that this and all sorts of other natural market failures exist).

You can replace "Facebook" with thousands of other companies. Everyone is doing this because the cost is low, it's easy, and the return is massive. The sole service my roommate's company provides is matching your customers with data about them from countless other sources.

If you want a peek into a small section of this type of data, go build a facebook ad. You can see all the targeting options. You can upload a list of email and build a "look a like" audience of people who are similar to your customers.

A company called Cartalytics will let a brand purchase lists of people who have bought a specific product in the past 6 months and show them ads. E.g. if you've bought a Big Mac (with a credit or debit card) in the last month, I can show you McDonald's ads, but they are super expensive.

I've been saying this for years. It's pretty clear that if Facebook told regular users just how much they knew, those users would be seriously creeped out (though, these days, probably not creeped out enough to do anything about it). I expect that another example of this would be the ability of their facial recognition system and the breadth of the database behind it.

Users are Facebook's product, and they should expect to be treated as such. The Facebook site and associated services are just infrastructure designed to a) collect information on users and b) give advertisers optimal access to those users.

edit: also, obviously, Facebook is not the only company engaged in this sort of thing. It's all around us.

Facebook have to respond to Data Subject Access Requests in the UK, which oblige them to send you every piece of personally-linked information - for a maximum £10 fee.

I did this with my bank a few years back and got back a box file full of credit scores, lending decisions and other stuff they'd never normally expose. Facebook's data for a busy user is going to be enormous by comparison - has anyone done this lately (and published / summarised the results?)

That might explain a pretty creepy thing Facebook did the other day to me.

I just created a new Facebook account after maybe 4 years of radio silence. Two years ago, I had a job doing IT contracting; often I would go to businesses and repair laptops or run cable to a COM room. We had very, very few residential clients since they weren't worth our time; the few that we did have were really just a courtesy for doing business for so long. I went to one resident's home a SINGLE time, hardly interacted with the man, and he definitely did not know my last name.

Guess who pops up on my "Suggested friends", with no mutual friends or place of work or any similar "liked" pages? Yeah, that one client.

Similarly, we worked in a small office in a cold storage facility, and Facebook also suggested that I add their accountant as my friend.

It's really creepy, but if Facebook was able to know that I worked at that employer then it's possible that it was able to make the connection.

>"For instance, opting out of Oracle's Datalogix, which provides about 350 types of data to Facebook according to our analysis, requires sending a written request, along with a copy of government-issued identification in postal mail to Oracle's chief privacy officer."

This is outrageous. Why is the onus on a user who never gave permission to the data broker in the first place? They operate in the digital domain when it comes to selling your data, but when it comes to consumers' rights and concerns they operate exclusively via snail mail?

Don't expect this to change any time soon. These brokers have the US Electorate in their pocket. Bought and paid for.

I think this forum has to recognize a lot of work being done in the valley especially Google and Facebook is ethically questionable and seeking to brush it under the carpet or 'normalize' it perpetuates a dissonance. For starters the whole mythology of liberal freedom loving nerds sits in stark contrast to the reality of actively developing and enabling authoritarian technologies.

The curious consequence of the willful ignorance on one's own actions is the continued posturing and stark dissonance in expecting ethical behavior from other segments of society. If you can't behave ethically you can't expect it from others.

That level of dissonance is untenable and ultimately every intelligent person has to realize not recognizing and confronting unethical behavior is a race to the bottom and will reflect in every aspect of life around you.

Is there anyone out there making a paid, zero advertising/data collecting social network? What if this service allowed you to buy access for 50 of your closest friends and family? I would think if it was executed properly and you provided a standard "I'm deleting Facebook and here is why, apply to join my paid for network group" post people would consider making the jump. I know there's a lot to Facebook and I wouldn't expect some new company to stack up feature for feature. Just give me chat, text/image posts and the wall and I will be happy that I can keep up with my close friends and family. I wouldn't be entirely surprised or disappointed if Apple attempted something like this on their Messages platform but I would just hope they'd make it accessible to all phone/computer/tablet users.

Sort of related: have people noticed, or have they officially announced, that they are tagging photos in the HTML alt field with a description of the actual photo? It's pretty accurate, with texts like "two people smiling, with baby".

When I got married, my husband pretty much immediately showed up as my spouse on my TransUnion credit report. How did they know that? Our names are different. At the time we didn't have any loans together. We lived together, but so do siblings and roommates. We didn't register for any wedding registries or send out any announcements. Our wedding consisted of signing some paperwork at City Hall. They also marked me as "Active Duty Military or Dependant" (hubby is in the army, so I became a "dependant" when we got married). So the only logical explanation is that TransUnion can access DEERS, but I would hope the DoD doesn't allow random private companies access to DEERS... They DO have a website where you can look up whether someone is covered under the SCRA, but dependants aren't covered under the SCRA and don't show up when queried (I tried).

Again this is my credit report. I didn't report a change in my martial status to any of my financial institutions. Not banks, not credit cards, and we already had a joint account for two years before we were married.

I can't speak for other countries, but why do American people seem to trust companies more than they do the government? I mean, it is completely known that companies are here to make money, and publicly traded companies are here to please their investors, so they will do whatever it takes to do that. They study us, classify us, categorize us, manipulate us. They spend billions in research so they can make that 'perfectly tailored' ad to get us to buy their product. They are constantly buying our data and selling our data, JUST to make their investors happy, and we seem to always just shrug it off.

'Meh.'

I am honestly more ok with the government having this data to keep tabs on me than these hundreds of other companies treating my personal info like it's a trading card.

It's funny that a newspaper criticizes Facebook's data mining practices ... but when I opened the article on their website, my privacy badger addon told me that 16 scripts had been blocked (facebook!, twitter, google analytics, chartbeat, outbrain, pardot, ...). Then I read through the article and half way down they throw me a huge banner in the way telling me to like their page on Facebook :/ So basically they preach something and do something else, they are really a bunch of hypocrites!

> One Facebook broker, Acxiom, requires people to send the last four digits of their social security number to obtain their data.

This is just one of the many WTFs that Facebook apparently actively supports.

In what world, under what possible explanation, was this ever a good idea? Or even a reasonable one? Either the US SSN is like a password (it's not), in which case how did Acxiom get their hands on it, or it isn't (correct), and it doesn't serve the purpose of identification.

Letting this sort of crap run wild also affects what is considered "normal" or common privacy in other parts of the world, like the EU, it slides the window. Continuously pushing the boundaries against people watching helplessly as layer upon layer of foundations of surveillance are built. Authorities don't do much until adoption is way beyond the curve of network effect, or they do it weirdly. And by then people think it's normal or acceptable.

Already now, on countless popular sites, advertising transgresses heavily on not only guidelines but also law. Medical claims, product placement, child advertising, you name it.

What can we do to not make the lowest common denominator decide what's normal?

Speaking of which, perhaps someone can shed some light on the suggested friends feature. Many people suspected it uses GPS/Wifi to perform location based friend suggestions, as well as contact book uploading. However, it doesn't really explain my own case:

I recently encountered a friend suggestion for someone that I only know online (IRC and later, Google Hangouts). I don't really know who they are other than a name (as exposed by Hangouts). I've never met them, as they are in a completely different country. I don't have the Facebook app, and the Messenger app is forbidden to read my contacts as per CyanogenMod's Privacy Guard. I fail to understand how FB can suggest this. The only possible explanation I can think of is that they searched my name on Facebook. How else could they do it?

When I read this article, I was expecting to see a description of what they collect from users. But the real controversial and creepy part is what's available from the data brokers.

The fact Facebook is aggregating all this to make for better advertising options is discomforting, to be sure.

The most concerning aspect of the article is that these data brokers are able to correlate my purchases. It seems inevitable that insurance companies will take all of these individual data points into account: "We're sorry Mr. Register, because you buy McDonald's every week we'll have to raise your life insurance rates."

This became painfully obvious when LinkedIn's algorithm started making extremely circuitous connections that freaked people out. People are, relatively speaking, painfully stupid; algorithms are ridiculously capable. The result is a freak-out. Facebook, being psychologically aware, protected its users from the truth before it could be known. It was long ago that Google's Eric Schmidt said "we are on the verge of predicting our users' thoughts"; Google is just as slick as Facebook.

I haven't had a Facebook account for years, and my phone number was never associated with it. Yesterday I visited the site on my laptop to look up the page for a tavern that's re-opening. 1/2 hr later I got a text from 32665 with a Facebook confirmation code. WTF; creeped the hell outa me. I replied with "stop" and received verification that "Texts from Facebook are now turned off." I visited their site again to request whatever data they have on me, but even though I checked the "I don't have a Facebook account" button for the request they insist that I log in to finish the process. Not sure where to go from here with it.

How do the data brokers know whether one shops at dollar stores? Who is leaking our information to the brokers? Is the store or the credit card company releasing information to a third party? The store gets the customer name from the credit card. The credit card company knows that a transaction took place at the dollar store. Any other possibilities?

It is easy to forget that FB is a media company, and as such it makes money not only by selling ads but also by manipulating the masses. Today they may focus on serving ads, making $3.6 billion annually. Tomorrow they may focus on something else, for example serving fake news to manipulate elections and making 10x more. The data they collect is only a means to an end, and I am afraid I won't like the end when it arrives.

> Of the 92 brokers she identified that accepted opt-outs, 65 of them required her to submit a form of identification such as a driver's license. In the end, she could not remove her data from the majority of providers.

I'm operating on the assumption that some day soon there will be a market for personal services.

I know it sounds crazy now, but people also thought no regular person would ever need a personal computer. IMO the next computing revolution is in personal appliances with an OS (black box, plug-n-play to regular folks) that serve up usable voice recognition and other SaaS stacks that replace the "free" data black holes currently in use.

Some day I'll need to reclassify all my cyberpunk books as non-fiction it seems. We are nearly at the point of having real-life equivalents of things like information brokers and a Central Intelligence Corporation. The funny thing is the real companies I've seen are more creepy than the over-the-top portrayals of your typical dystopian corporate future. Worse yet, they do a better job of automating it all vs. the typical human intelligence or hacking missions in those types of books.

I have looked through the report. The only useful information was brief description of attack methods, everything else looks like a list of general recommendations one can find on the OWASP website.

As I understand from the report, the main methods used were:

- sending emails with executable files that victims for some reason executed

- phishing

So, they used script kiddie level tools anyone could use (and they are cheap; you don't have to buy expensive zero-day exploits on a black market). But of course this could be done intentionally so it looks amateur-ish.

These attacks could have been easily mitigated. First, the OS and applications should not run unknown files from the Internet (because some people are used to double-clicking on everything they get in email); second, we should start using physical cryptographic keys instead of passwords. Common people cannot handle passwords: they either make easily guessed passwords or enter them everywhere without thinking. I hate passwords too because they are hard to remember (and please don't suggest that I should download some software and upload my passwords to a "cloud" in an NSA-controlled country).

By the way, iOS is the only popular operating system I know of that doesn't allow executing files downloaded from the web or email. Apple did it the right way.

The report also contains a pretty useless YARA rule named "PAS TOOL PHP WEB KIT FOUND" that can be used to search for malware in PHP files. It is interesting that they replaced the digits in the 'base64_decode' function name with a regexp, as if there were any other similar function names.

It seems unlikely that email hacking will stop in the future. If the leaked emails actually influenced the elections, it was because of their content. I've heard exactly zero credible claims that the leaked emails were falsified in any way. Perhaps if political candidates/party executives are going to do unethical/illegal things, they shouldn't discuss them over email.

As an aside, for those looking to understand YARA rules, [1] provides a brief introduction and [2] introduces how to write them. I needed to look it up myself, but it seems relatively straightforward if you have a programming background.

tl;dr: YARA rules are a method of categorizing malware based on its characteristics. So the PDF here released a YARA rule to identify a specific piece of malware used in the hack (it's not clear to me what it identifies, other than a PHP script).

For convenience, here's the YARA rule presented in the PDF formatted to be more readable:
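(Reconstructed below from public copies of the JAR; the exact strings and thresholds may differ slightly from the original PDF.)

```yara
rule PAS_TOOL_PHP_WEB_KIT
{
meta:
    description = "PAS TOOL PHP WEB KIT FOUND"
strings:
    $php = "<?php"
    $base64decode = /\='base'\.\(\d+\*\d+\)\.'_de'\.'code'/
    $strreplace = "(str_replace("
    $md5 = ".substr(md5(strrev("
    $gzinflate = "gzinflate"
    $cookie = "_COOKIE"
    $isset = "isset"
condition:
    (filesize > 20KB and filesize < 22KB) and
    #cookie == 2 and
    #isset == 3 and
    all of them
}
```

Note the `#cookie == 2` and `#isset == 3` count conditions and the digit-matching regexp, which other comments here discuss.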

Jeez people, read the report; it isn't any kind of justification of anything, it's just a fairly generic "don't do this" like I see 100 times a week at work. The real details were likely shown to Congress and the Senate (or at least a portion of them). Those are the only people who can say whether the actual attack was real or imagined. Do you think the British and Americans were going to publish stories about Enigma in the Times during WW2? There were only a handful of people in the world who knew the details.

While we technical folks would love to see all the details that's not how intelligence works. Some things have to be secret even though these days everything becomes a conspiracy and a political controversy and a tweet storm.

That said, I doubt anyone in either party committee had any idea how security works. Even worse, much of the US government is (and will be) led by political benefactors with an axe to grind, not people with a real clue about modern security either, so expect nothing much different in the future until someone hacks the nuclear "football".

There have been many proven hacks from many states that are far worse (the Chinese fighter plane that looks almost identical to the F-35 comes to mind) than exposing the DNC's dirty laundry. No one is denying that the emails are real. This seems like some sort of distraction.

Page 5 lists a YARA signature named "PAS_TOOL_PHP_WEB_KIT" that is supposed to match some kind of payload from the attack. It looks generic but is surprisingly specific.

A quick search reveals that it happens to exactly match [1] (if you fix a few obvious bugs where the GitHub code uses $COOKIE instead of $_COOKIE, or produces base64decode instead of base64_decode; the attackers probably fixed those in production). Apart from the exact combination of three `isset` and two `_COOKIE`, that code starts with the unusual sequence `<?php $l___l_='base'.(32*2).'de'.'code';`, which happens to be matched by the (also very unusual) regex from the report. It also ticks all the other boxes from the provided signature.
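As a quick illustration (transliterated to Python rather than PHP, purely as a sketch), the arithmetic-and-concatenation trick in that prologue resolves like this:

```python
# The PHP prologue 'base'.(32*2).'de'.'code' builds a function name by
# string concatenation at runtime. The same expression in Python:
name = 'base' + str(32 * 2) + 'de' + 'code'
print(name)  # base64decode -- note the missing underscore (the "bug")
```

This is why a naive grep for `base64_decode` would never find the sample on disk.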

I just found that within five minutes by searching github. It seems like an encrypted payload that can be executed by visiting the php page while having the password in a POST parameter or in a Cookie.

I'm not an expert, but the encryption looks very simple. Maybe somebody feels up to the challenge to try some statistical analysis or similar on it?

Edit 2: The obfuscation used in the Russian PHP shells looked awfully familiar; I think the shell they're using could very well be this one, http://profexer.name/pas/download.php, originally shared on a .ru hacker forum.

There doesn't seem to be much new information there. A bunch of IP addresses, file hashes to look for, and general network security advice, in addition to a history of the attacks which was already public, and an explicit attribution to the Russians.

They mention a phishing attack which took place after the election, but don't give any further details.

Is this more or less reputable than the clear and unambiguous claims of Craig Murray regarding the DNC leak? He has stated clearly that it was the result of him personally traveling to DC, acquiring the data dump face to face from a non-Russian DNC insider, and then returning to the UK to give it to Assange himself. If the us-cert.gov report is to be believed, then both Assange and Murray are liars. Both cannot be true. Who is more credible? Perhaps we can compare the history of truthfulness in claims from each party? Would that be a reasonable approach to ascertain who is lying here and who is telling the truth?

At least the report is short. As others have stated, it doesn't really lay out any new evidence to believe the Russian government was behind the hack. It lays out information that almost looks like evidence, such as a list of usernames, but doesn't discuss how the information is relevant to anything. There is an assertion that three teams were involved, and that two teams communicated with each other, but no discussion of where this information comes from or why anyone should care how many teams there were. I get the feeling that there's a message for someone, but I'm certainly not the intended recipient.

The advice on avoiding similar hacks in the future is a grab bag. Near the end it encourages using /etc/shadow on POSIX systems. I installed Linux on my personal computer in 1999. Since then, I've installed several Linux distributions, FreeBSD, OpenBSD, Plan 9, Inferno, etc. I can't remember any installation offering to store password hashes in /etc/passwd. Some of the advice is better, but not all of it is. I'm honestly disappointed. Perhaps this is a wake-up call to somebody, but I would have hoped Sony's hack had already served that purpose.

I've done security remediation for the U.S. Govt. About the same vulns you would expect on 2003 PHP apps that haven't been updated since (OS or otherwise). Congress doesn't budget for server/app maintenance, simple as that.

Even if Wikileaks had never published anything, she would still have lost. She had the greatest help, money, and collusion from the government, the media, international elites, and her party, and still lost against the most unpopular and unfit candidate of all time, who got more than 300 electoral votes. That's how much of a loser, and how corrupt, she is. Just get over it.

"At least one targeted individual activated links to malware hosted on operational infrastructure or opened attachments containing malware" - I pity that individual; I'm sure he's getting blamed in the party, like, "Hey, aren't you the piece of work who clicked a link?"

That big list of code names is a hoot; it seems they mixed the names of soldiers from Metal Gear Solid 5 with a list of names they got off a small IRC server, some codenames out of James Bond novels Ian Fleming never wrote, and some fragments of MIME headings salted with just a little bit of line noise.

The report released by the US govt only contains a bird's-eye view of the hacking incident and not much technical detail. But they do reference APT28 and APT29, which are described in reports from FireEye in 2014 and 2015:

The evidence is circumstantial, but there is so much of it that I think you can confidently say that Russia is behind it. For example, compile times pointing towards office workers in Moscow, Russian language settings and so on.

Folks, now is the time that we need to make it clear that posturing and PR statements do not constitute valid, independently verifiable evidence. As a citizen of the United States, I am beyond terrified that our government has made public statements, buttressed by newspaper articles supported by nothing but anonymous sources[1], vilifying Russia for a nation-state-level cyberattack. The support for such claims, as presented, is the "sophistication" of the attack, which is not evidenced here (phishing is not a particularly sophisticated means of entry). At best, this is a mistake, and at worst, it reeks of anti-Russia propaganda that will only serve to escalate tensions between the two countries. Every single person who absorbs a report like this without seeking supporting evidence (note that this report immediately starts by claiming Russia's involvement, and never provides support) is, to some extent, culpable in a hypothetical reality where the US Government is blatantly wrong about this one.

There's only one thing we can do at this point: File Freedom of Information requests. The fine folks at Muckrock[2] make this absurdly easy. Send requests to the CIA and FBI -- hold them accountable to their statements, which have to date been unsupported, that Russia as a nation-state entity was behind anything.

They ought to encourage the use of prepared statements to defend against SQL injections. It's the only way to handle that threat, yet the report does not mention it:

"5. Input Validation - Input validation is a method of sanitizing untrusted user input provided by users of a web application, and may prevent many types of web application security flaws, such as SQLi, XSS, and command injection."
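For illustration, here's a minimal sketch of the difference in Python with sqlite3 (the table and the hostile input are made up; the same binding idea applies to any SQL driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the input rewrite the query --
#   f"SELECT * FROM users WHERE name = '{user_input}'" matches every row.

# Safe: a prepared statement binds the input as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)  # [] -- the injection string matches no actual user
```

Input validation, as the report suggests, is a useful defense in depth, but parameter binding is what removes the injection class entirely.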

I think there is a second assumption which is overlooked. Hypothetically, let us assume that the Russians did break in and steal emails, etc. Governments do so all the time, so it could well be true. Now the question to me is: why would they release all the emails to Wikileaks? The emails seem relatively benign and not very damning of HRC. Why not keep the information in your back pocket until it can be researched and leveraged? Releasing it diminishes its value to an intelligence agency. And why not release selected emails of HRC's (herself)? Surely the Russians could have gotten those if they tried. Assuming she's not squeaky clean, they could have released selected individual emails anonymously and ensured a Trump win, plus kept other assets for later.

Would a better hypothesis be that US intelligence services saw break-ins and so released the information they knew foreign governments could use as leverage against a likely future president? This way they immunize against the information's use, plus blame the Russians (and the US would dearly like to punish Russia for their victory in Syria anyway). This makes more sense to me, but I am interested in why this hypothesis is wrong or less likely.

Why does everyone think that only Russians could write such malware, and in Python at that?!

Also, does it dawn on anyone that anyone can actually take this malware, reverse engineer it, and repurpose it? All it takes is changing one JSON blob embedded in the code to point to your own servers for C&C and using your own AES IV/key.

Also, I find it funny that they use embedded timestamps and resource locales as proof of anything. Has no one ever used Resource Hacker or the 'strings' command? Is it really that hard to scrub or falsify timestamps in a DLL/EXE?

The most damning proof would have been some SSL certificate reused in a known compromised C&C server. I heard rumors to that effect, but nowhere in the analysis was this highlighted or discussed.

Nonsense. I'm doing cybersecurity analysis for a Navy program this very day. To the person who says "The attackers did use stealthy persistence techniques often called 'rootkits'" -- you know exactly nothing about what you're talking about.

A rootkit is a means to obtain "root" permissions, which is an exclusive feature of UNIX/Linux operating systems. PowerShell is a Windows product... these systems are Windows-based. No rootkit. Period.

At least some of the DNC users who had VPN access (which, presumably terminated "behind the firewall") had local Administrator rights on the PCs they used [1]. Getting one of those people to load malware and piggybacking on their VPN connection (letting them enter 2FA if there even was any) was likely a cinch.

There's nothing that I've read anywhere that makes me think the DNC was any kind of difficult target to compromise. Likely their information security posture was on par with industry norms for small office networks -- absolutely terrible.

OK, I think I know what happened: Obama forced the FBI to produce the report, but they got nothing, so they filled it mostly with irrelevant mumbo jumbo slightly more complicated than Obama can understand, to slide under his scrutiny, and then he pushed it out to the public without first consulting an actual security professional.

It's clear that this is being done to validate their lies about the Russians' hacking. The US-CERT report came out today on this. I understand all this content, and it is very limited in scope. It does not provide any validation that Russia was involved in any kind of hacking against the US. They described what is probably the most common form of spear-phishing, put Russia's name on it, and listed a bunch of other hacking tools which are made by hackers who actually claim to be part of ISIS (probably CIA assets; it looks to me like they are trying to false-flag this) https://en.wikipedia.org/w/index.php?title=Fancy_Bear&oldid=...

Reality check nothing, because there were no bombshells found like James Comey re-opening the FBI's investigation against that woman -- a woman who nationally, and especially compared to Obama, is highly unlikeable, with a horrible public image. Though we're stuck with that crazy man... a losing game either way!

Actually, this title doesn't do the service justice -- it yields detail clear down to local offices and gives a '+' link for each to get details like contact information.

For example, this is what is returned for a given, random Sunnyvale, CA address; the lone change I would suggest is to have the county and then city offices listed last to maintain a sequence of decreasing granularity. Note that Sunnyvale is an example of at-large city council representation, so all are listed. Very nicely done!

The feature I've been looking for but can't seem to find is a calendar view of when your elected officials are up for election.

Virginia for example holds their major state elections the year after Presidential elections. Local elections come up at seemingly random times. I vote absentee and remember coming home for a visit and my parents asking me to vote in some small election that was being held in the middle of summer.

Being able to add all of the offices to my calendar, ideally with important deadlines like when you can apply for absentee, vote early, and when you have to have your ballot in by would be amazing.

I got my list and was pleased to see things like Auditor and Coroner, but why no state senator or state representative?

There's also no judicial branch to be found, which may not matter much for the Federal Supreme Court, because they're appointed, but just about every jurisdiction I fall under, State Supreme, State Appeals, Local Criminal, Local family, has an elected judge.

Judges tend to be a major source of ballot fatigue, because nobody knows who they are. You could argue that's a good thing, because then only informed voters are selecting them, but you could also argue that it's a bad thing, because only the self-interested are voting for them.

This website believes the Auditor-Controller for Santa Barbara County is Robert Geis, but he retired last March and was replaced by Theodore A. Fallati. I also wouldn't say it is fair to call this "everyone": it is missing all of the local special districts (such as the Goleta Water District and the Isla Vista Recreation and Park District), which I would argue are much more important to my life than the person who is currently the "Treasurer-Tax Collector-Public Admin." (someone I believe I have never actually met, despite having been extremely active in local politics for years, having run to be a County Supervisor, and even now being elected to the board of a new district which will come into existence in March 2017).

Really useful, love the idea. There may be some data troubles though, I looked up myself and found that the Twitter link for Senator Maria Cantwell goes to a porn account, not her actual profile. Yikes!

Cool service. Beware that the Wikipedia links may direct to different people with the same name, particularly for local offices. For example, my Assessor links to a British wartime codebreaker and my Surveyor links to a Kiwi rugby star.

If you guys have all this information, I would love to see a breakdown by subject, for instance, all the elected positions that have something to do with managing elections, and their next election date (so I know who to donate money to if I want to maximize health of elections nationwide).

Is the point so that I can tweet or mail letters to my elected officials?

Is tweeting at officials supposed to help my station in life? It just seems ridiculous to me that tweeting would be taken seriously. I suppose it could be, but that would actually scare me more.

I always believed that if you want to make a difference, then vote with your wallet, and I don't mean donating money to politicians. I mean making purchases from companies you respect.

A technical tour de force, but the premise is flawed. "I" have no representatives. "We" have representatives as a group: the mob that rules by force of numbers, all acting strictly democratically.

Right off the top: the president, VP, both senators, my federal and state representatives, governor, and lieutenant governor -- every one of them is 100% useless to me personally, because not a single one of them shares even one tiny, insignificant view that is important to me. Sure, hey, those are the breaks, but let's not pretend they represent me.

The main difference between TCP and UDP, as this programmer discovered, relates to quality of realtime service.

Times to use UDP over TCP:

* When you need the lowest latency.
* When LATE data is worse than GAPS (loss) in the data.
* When you want to implement your own form of error correction to handle late/missing/mangled data.

TCP is best when:

* You need all of the data to arrive, period.
* You want the stack to automatically make rough best-estimate use of the available connection's /rate/ of transfer.

For a videogame, having a general control channel to the coordination server over TCP is fine. Having interactive asset downloads (level setup) over TCP is fine. Interactive player movements /probably/ should be UDP, very likely with a mix of forward error correction and major snapshot syncs for critical data (moving entity absolute location, etc.).
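A minimal sketch of the UDP side in Python (localhost and the payload are arbitrary): each `sendto` is one independent datagram, with no connection, no ordering, and no delivery guarantee.

```python
import socket

# Receiver: bind a datagram socket; there is no accept/connection step
# as with TCP. Port 0 lets the OS pick a free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(2.0)  # don't block forever if the datagram is lost
addr = recv.getsockname()

# Sender: fire-and-forget; a lost datagram is simply never seen again,
# which is exactly the behavior you want for stale position updates.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"pos:10,20", addr)

data, _ = recv.recvfrom(1024)
print(data)  # b'pos:10,20' (on loopback, loss is effectively nil)
```

Everything on top of this -- sequencing, forward error correction, snapshot syncs -- is yours to build.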

The recommendation to avoid TCP altogether is surprising to me. Having encountered a number of video-conferencing systems which are in a similar space, it seems pretty standard to have separate real-time and control sockets on UDP and TCP respectively. I skimmed the linked paper and didn't find it conclusive; can someone summarize how it is that having a TCP socket can affect UDP traffic on the same interface?

All that said, I certainly see the argument for an all-UDP protocol in terms of defining your own retransmission approach, or attempting to avoid it altogether with forward error correction or whatever.

It's aimed at people who don't know the difference between UDP and TCP (and possibly wet string). Yet he recommends they implement their own reliable protocol over UDP, and that they avoid TCP because it's better to implement your own QoS?

Why not add obtaining a PhD in quantum mechanics just to round it out? It wouldn't much alter the odds of pulling it off.

It's been a while since my networking class, but if I remember correctly with UDP you have some serious issues where you can end up clobbering your network, filling up buffers in the middle and dropping tons of packets. The lack of congestion control is a huge no-no.

For instance, in the example he gives, sure, you can tolerate dropped packets for player-position data, but how do you know whether you can tolerate sending at 10Hz, 100Hz, or 1000Hz? Even with TCP you can't (I think...) programmatically adapt to the size of your pipe. That's kinda abstracted away for you, so that you just say "send file A to B" and it does it for you.

Are you supposed to write your own congestion control in userland? Seems like this should be a solved problem.
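There's no standard userland answer that I know of; as a sketch of the crudest possible starting point, here is fixed-rate pacing in Python (`rate_hz` and `paced_send` are made up for illustration -- real congestion control would additionally adapt the rate to observed loss and RTT):

```python
import time

def paced_send(send_fn, packets, rate_hz=100):
    """Send at most rate_hz packets per second; no adaptation to loss."""
    interval = 1.0 / rate_hz
    for packet in packets:
        start = time.monotonic()
        send_fn(packet)
        # Sleep off whatever is left of this packet's time slot.
        remaining = interval - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)

sent = []
paced_send(sent.append, [b"a", b"b", b"c"], rate_hz=1000)
print(len(sent))  # 3
```

This only bounds your own send rate; it does nothing about buffers filling up elsewhere in the path, which is the part TCP's congestion control solves for you.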

I tried to build a UDP library once in C# with different methods (BeginReceiveFrom, ReceiveFromAsync, ReceiveFrom). You learn a ton and it's quite interesting. My goal was to recreate something similar to .NET Remoting based on UDP.

If you're writing a UDP library, you also need to think about performance, object pooling, connection buffers, and threading/async issues, and on top of that you also want to provide a nice API to the outside world for the client and server... Well, it gets messy...

If you're into this sort of thing, I can advise you to look at Haxe libraries. I learned a lot from them. There are very simple, idiomatic server/client-side implementations which are easy to follow, even if you don't know Haxe [1][2].

Does anyone know if SCTP is suitable for / in use by any games? It supports streams to work around the head-of-line blocking problem TCP runs into, and it also supports opt-in unreliable delivery for game data. On the surface it seems ideal for games, though I don't know if it's getting much actual use.

"TCP has an option you can set that fixes this behavior called TCP_NODELAY"

That fixes nothing. Now you are sending too many small packets using too many syscalls. Just like with UDP, buffer in user space and send in one go. If you do that, TCP_NODELAY makes no difference. (The exception is user input; if you want to send those events as they happen, use TCP_NODELAY, but think about the why... it has little to do with what this article is talking about.)
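For reference, setting the option is a single setsockopt call; a Python sketch (whether it actually helps depends on the write pattern, per the point above):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes go out immediately instead of
# being coalesced while a previous segment's ACK is outstanding.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
```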

Games likely send data only around 25 times per second, and ping is likely < 50ms. Waiting on a dropped packet and the delay it causes is unnoticeable. Add to that that clients will need some kind of latency compensation and prediction, independent of the TCP/UDP choice. Delays followed by bursts of 100ms or so are doable.

The problem starts when the connection stalls for more than 100ms, especially in high bandwidth games. During the stall both behave the same. After the stall, TCP will be playing catchup and wasting more time receiving outdated data, and handing it to user space in order. UDP just passes on what is received, with a lot less catching up, and maybe some dropping of packets.

But gameplay has been degraded in both cases. UDP just has a higher chance of masking and shortening degradation more.

Anything more than that is basically cargo-culting, like this article.

Anyone here have any experience using QUIC in any application of their own?

The custom congestion control makes me wonder if it only works well alongside TCP traffic - once everything goes QUIC, then what happens? I looked for a while for the ancient-history story of some blazing-fast server OS whose TCP implementation broke the rules, so it fell over when more than one such server was on the network, but couldn't find it.

In short, TCP will work hard to deliver 100% of the packets. So when a packet is lost, TCP asks to re-send the packet. This is fine to display a webpage or send a file, but it can't be tolerated in games where time continuity matters. I think it's the same issue in VOIP and video conferencing too.

If you want to get familiar with TCP/UDP and are not gun-shy with C, I would suggest the Pocket Socket Guide (now the Practical Guide to TCP Sockets)[1]. I have a really old edition, but it ages extremely well, and it's one of the books I always use to refresh myself on network programming basics.

The reality is that these days there generally isn't any packet loss, so UDP vs TCP isn't such an issue as it might have been in the past. In fact TCP has a number of advantages these days such as easier firewall traversal, WebSockets, etc.

A huge problem with TCP wrt gaming is the default ACK frequency on Windows which is set to 2. This effectively almost doubles the latency of game connections (sending/receiving a lot of time-sensitive small packets).

It can be changed with a registry setting (TcpAckFrequency) but you can't expect even a significant fraction of your users to do that. Why this isn't a per-connection option sort of like TCP_NODELAY is beyond me.

I thought it was somebody who had recently discovered the existence of the UDP protocol and was bragging about it to the world, but from skimming the article, it actually has some non-trivial remarks about UDP and TCP.

I think TCP has an unfair reputation. Our networks are better now than 30 years ago... Worst-case latency for TCP is like 3 seconds, compared to the packet never arriving. The trick is to hide the lag with animations. I think Google, Facebook, and World of Warcraft use TCP for their real-time apps!?

I really love these IndieHackers interviews but surprisingly find them somewhat demotivating.

Even the more successful interviewees make less revenue per month than I can make through straight consulting. And they're the success stories; most people (including myself) make far less per month from their products.

How do people keep motivated to work on side projects when consulting is so much more profitable?

Hi everybody, this is Alex, the co-founder of Creative Tim. I hope the information from this interview will help you achieve more with your current business or give you the courage to start your own.

If you have any suggestions or feedback I would be glad to talk with you.

I read some of the comments, many saying that $17,000/mo. isn't a lot of money, but don't forget that Creative Tim is based in Romania.

Salaries in Romania aren't as good as in the States.

I'm currently based in Italy, where a programmer can make as little as €1,200/mo. (salaries in Italy are much higher than in Romania, possibly 100-200% more). I make more than that freelancing, but I don't think $17,000/mo. is bad, both in general and for a company based in Romania.

For the last 2.5 years I've been working at DigitalOcean as a remote employee. DO has more than 50% of its staff remote. I think it's important that a large chunk of a company and team be remote for the exercise to be successful, so that there is a forcing function to use asynchronous communication. It's been really life-changing for me.

We have a bunch of styles of remotees: work from home, work from a coffeeshop, work from coworking spaces, and work from a new place every day.

I've tried all of these styles, starting with working from home, then getting super depressed from loneliness and getting a coworking space (DO pays for it), then realizing I didn't use it and instead working from a mix of home, coffeeshops, and random visits I pay to my friends. And now I've switched to mostly working from the craziest settings I can think of. I've worked from camping spots, from a sailboat, in a national park, and on a beach in Asia, and it all works out once you're used to "travelling from anywhere".

I'm having the best time of my life experimenting with what it really means when your ability to feed yourself is decoupled from your physical location. I feel like I'm living in a future that more people may soon have the chance to live, and that it's my duty to find a "Theory of Working In The Future". My first theorem is "Don't stay home every day, else you shall go crazy".

Also, think about the implications of OneWeb and the constellation that SpaceX has been working on; I'm thinking "what if I could get low latency/high bandwidth internet from the middle of any ocean"? The future looks bright.

As a remote worker, I would really enjoy a coworking space at least 1-2 times per week, but the hour commute and the expense just are not worth it for me. I've been working remotely for two years and absolutely go stir crazy, and even into fits of depression, when I'm not really pro-active about getting out.

I've been volunteering with a community theatre this year, which gets me out of the house after work most week days. My mood goes up about 10x when I do this. During the month or so downtime between plays though, things start going bad again.

I'm also involved with some other meetups/clubs and do piano lessons. Putting together a deliberate schedule of "outside activities", at least for me, is absolutely necessary to make it work.

Overall a well written article with some valid criticisms. After 2 years working remotely I observed roughly the same facts, but have a slightly different spin on the whole thing.

Like everything in life, working remotely has tradeoffs. One person's pro is someone else's con.

Pro: I potentially gained hours of my life back every day. I know many people who work for similar companies who spend more than an hour commuting every day. They take less-desirable jobs and leave behind their coworkers just so they can get an hour of their life back and reduce their commute from 2+ hrs to 30 minutes.

Pro: I can disassociate my COL from the company's choice of office location.

Pro: I can cook food in the crock pot on a regular basis without worrying about my house burning down.

Pro: I can walk my dogs during my lunch break (or pick up food from the grocery store or run some other errand).

Pro: (subjective) My coworkers' competence is higher than what I typically observe at companies that limit their hiring pool to people who live within a few minutes (or hours) of one office building.

Pro: I can and have worked from a hammock, a camper van in a state park, a car on a road trip, and a cafe in Paris.

Pro: No requirement to waste literally hours of my day in bullshit pre-lunch planning, post-lunch coffee, etc. When I'm onsite I'm happy to spend lots of time on watercooler talk, but I'm not obligated to do it every work day.

Pro: I can go hiking on my lunch break.

Con: I don't see anyone but my spouse. I have to go to additional meetups in order to make up for this.

Con: Not as much face time with execs. This can matter politically and for your career.

Working remotely killed my mental health. Even with Slack, Hangouts, and all the rest, I became lonely, had difficulty focusing on work, and generally became significantly less happy and productive. I tried a coworking space, and while I made some great friends there, it still wasn't doing it for me.

I've been back at an office job for about 3 months now and it's been a huge improvement. I love being in an office with a team of people all working on the same thing, solving problems together, and socializing.

Obviously, different things work for different people, but I wish I hadn't bought into the remote work idea as wholeheartedly as I did. It's important to be aware of what you get out of onsite work in addition to the drawbacks.

As someone who spent the better part of the last decade working remotely, and having read tens of rants about open office floor plans with which I agree 100%, I think that the problem here is that there is no Silver Bullet. Those of us who prefer a results only work environment will never thrive in an open office, and those of us who need human contact will not thrive in a results only work environment.

More likely if you're reading this, you're somewhere between those two extremes. I'm an introvert when I need to get things done, but I'm an extrovert everywhere else. I have seen just as many people crash and burn trying to motivate themselves while working remotely as I have seen people go quietly nuts in an open office.

I am going to be working from a co-working space in the near future, but I suspect that I will still need to spend significant amounts of time on my own in my home office if I want to stay productive. I don't expect that solution to work for anyone else, but after nearly two decades in the software industry, I know what works for me.

I wouldn't consider myself an extreme introvert but I am 100% satisfied working from home (been doing it for 4.5 years now).

The author mentions socialization only in real life. What about online socialization? I still talk to 2 of my best friends in a chat room (used to be IRC, now we use Slack). I keep up with old friends on Facebook. I have discussions and arguments on HN and Reddit.

And probably most importantly, my remote company has a VOIP chat that everyone is on and we routinely have "water cooler" type convos, in addition to serious stuff.

So yeah, I think you can solve this problem without needing real life interactions. Embrace your digital life to the extreme! And, companies hiring remote workers need to support them better, with VOIP and text chat rooms that they can be in (with other employees) and feel like they're part of the team and not just a worker.

It is not just remote work either - I've noticed a trend in my work where although I work in an open plan office surrounded by people, my teams are increasingly "global" which means that usually there is 1 team member on their own in each office.

Although I am surrounded by people, I'm not actually working with any of them, and with desk moves every 3 months or so as teams grow, you only ever end up with very superficial "friendships"/social interactions. "Hello" "How was your weekend" "Which team are you on?" "I am on this team" etc etc. You're just doing it out of politeness really; in a month or two they'll move on to another team/office, or there will be another desk move, and you're back to square one, surrounded by strangers.

It is not unusual for me to go a whole day in an office surrounded by hundreds of coworkers without physically saying anything to anyone apart from "thanks" for holding open the door.

Coworking spaces are a mixed bag. I went to one for a few months but stopped again.

The good part is that you face more serendipity than when working from home. But really, it is not that much more. After only a few weeks, the novelty wore off, I got bored, and I saw more of the downsides: super small tables, no dual-monitor setup, always too cold, the commute, less free fruit, and the people. Some are quite nice, and you realize that you need random social encounters, but there are also the typical odd people to whom you cannot relate at all (like everywhere). Those people don't hurt, but I remember one who reserved the best flex desk the night before by leaving tons of her post-its and other papers there. No big deal, but nobody who makes you happy either.

I knew most people I met there before. Bonding with new people without having a common mission was not easy, it just didn't feel natural (and I am rather the extrovert sales type of guy). So, you can still feel 'alone' in a coworking space.

I think a coworking space makes more sense if you need a space as a team and want or need to see each other f2f on a regular base.

For business meetings or doing interviews, I prefer the lobbies of top hotels; they are even more representative than the best coworking spaces, and at the end of the month also cheaper, with full service included and no extra fee for booking a meeting room. And for two hours working away from home, I am a fan of Starbucks or any coffee shop with good wifi.

> If you're thinking of working remote, then think about what kind of working environment you're happiest with before you take the job, and make sure you'll have that environment available to you.

Seems to me that the best part of remote working is the ability to figure out what the best environment is for you. Don't be a theorist, be an experimentalist: try a bunch of different situations and see what you like best. It sounds like the author started down this path, but stopped too soon (at first).

> Are you sad when a lot of your office is out sick, or are you relieved?

Usually relieved, then I wonder why I bothered with an hour of driving to sit in the office by myself, when I could have done that from home.

> Do you get uncomfortable when you're in quiet environments for too long, or do you revel in them?

Love quiet! My office at work has no windows (not even internal ones); being able to close the door and cut off the outside world is the best!

> Do you feel weirdly lonely when you're in a noisy coffee shop, or do you feel energized?

Annoyed by the noise mostly. Coffee shops are for getting coffee and getting out. Libraries are way better for actual work, IMO.

This echoes my current gig to a tee: worked at home for a couple of months, went stir crazy, found an office. For me, the first office space was a hipster cafe type place that was just too freaking noisy. I moved to a Regus office which was ok but Regus were awful so a few of us clubbed together and got a truly shared office with both closed and open spaces for different type of work. I can highly recommend this setup as some days you just need to hole yourself up in a private enclosed office to do brain work. However the open space promotes social interaction and feeds the soul.

-- negative: lost interest in having friends (and generally the patience needed to talk to people), no social life, no career, work-life balance completely broken

-- positive: sleeping 8 hours a night! (and more if I need to), taking walks in the park / training at noon, never really sick, comfortable home office, can be efficient again (only my job doesn't require that). Started having ideas again and thinking about side projects.

Before going remote, for 15 years I commuted 2+ hours in the morning (and 2 hours again in the evening), was sleeping 4 hours a night by the end of the week, dozed off the entire weekend, and generally felt extremely exhausted, mentally and physically, easily catching the flu, etc.

Would I move back to office employment? I really hope I won't have to.

I've been working remote now for 5 years. All from home (sometimes from a car or a cabin). It takes discipline and limits. In my case - limiting my urges to finish out a project or get a bit further at 2am. It can be a blessing and a curse. I'm getting ready to 'venture out' - spend time at a coffee shop or coworking space. For the record, I've been 'remote' in many roles: employee and consultant. Overall, for me, I find routine an absolute necessity to get work done. By routine, I mean 'get dressed as if going into the office' - it's about mindset.

My company is 100% remote. And I recall very clearly in the interview process one of the most important traits to succeed - Be self-aware.

Everyone is different. Every new remote worker figures out what they need to do to make it work for themselves. And we do not all do the same things. But we all know ourselves well enough to try things, see how it works for ourselves, and figure out what changes we need to make it work. Of course, we also talk to each other, give suggestions, etc. But ultimately, to succeed on your own, you have to proactively care for your own mental health. And self-awareness is vital to doing so.

Great post. The spectrum stuff is very important and it's important to be honest to yourself and during the interview process about what you expect. I now tell every place I interview being on-site all week is not gonna work for me. I need to be remote at least 2 days out of the week to recharge. Otherwise I will burn out in less than 6 months. Most places seem to be ok with that and make concessions to letting me do that.

Living abroad, I've gone the remote road since 2009. I went through a lot of this same stuff but never associated it with being remote/working from home, though looking back now I can see it most likely was related. My health and mood improved once I started to force myself out to socialize more and once we had our children - the household is always busy and full of noise and life now, versus before, when my wife went off to work and I sat alone in a quiet apartment all day long. Since the birth of our second, I've been doing my first sprint in a nearby coffee shop each morning, and that has improved my mood and productivity even further. I've been toying with the idea of a co-working space, and this article has convinced me to give it a go.

We are social creatures, to varying degrees, and if we limit our interaction with others too severely, I think it makes it too easy to look exclusively and excessively inward. I'm all about self-analysis and looking inward, but there comes a point when you go too far and it's no longer reflection but a feedback loop of anxiety/fear/self-doubt... at least that has been the case in my experience! Also, regular exercise (running, lifting) has always helped me out of these emotional funks.

I have a similar issue at the moment. I have tried co-working spaces, but I work with people globally and my day tends to start at 5am. So to get to a co-working space, I'd have to leave at 4:30am, and that's provided they were open (they aren't.)

I've spoken to one about potentially giving me a key to the space, and perhaps that would work, but it's often easier just to roll out of bed, throw on coffee, and start my meetings.

I tend to fall into a pattern of not going outside much at all, just staying indoors. That then perpetuates my desire not to go outdoors.

It's solvable though; it takes effort on my part to continue experimenting and trying new things. It only becomes an issue when I just keep repeating the same situation. Definition of insanity: repeating the same things, expecting different results.

I joined a co-working space this summer and it was...meh. The people were nice and the facility was great, but I don't really see the point unless your employer is going to pick up the tab and/or you have a small apartment with no office or an insufficient one.

I found myself missing my widescreen monitor, standing desk, and chair. The amount of money I've sunk into my home office felt wasted when I used the co-working space.

And the co-working space would swing between eerily quiet or way too much noise. It seemed weird to go (and pay) for a co-working space where everyone is primarily staring at their laptops.

On the other hand, I would maybe consider going from a two bedroom to a one bedroom apartment if I had a full-time 24/7 use of a co-working space and then the cost would more than even out.

For my job(s) the last 8 years or so, I've had a mix of on-site (10%), travel (40%) and work-from-home (the remainder). There have been very long stretches when I'm neither in the office, nor travelling to meet with customers, however -- sometimes, months. From my standpoint, I can commiserate with the author here. I live out in the hinterlands with my wife and 6 kids, so there's no shortage of social interaction -- however if I've been stuck here for 6 weeks, I begin to get a little stir crazy. I "recharge" by going to trade shows/events/meetups in NYC (which is about an hour and a half away) -- the energy of the city is refreshing, but I wouldn't want to put up with it every day. Just once in a while...

As a remote worker myself, I can definitely resonate with the dark sides of working remotely as described in this article. The lifestyle is mostly portrayed as living the dream, however the lack of social interaction and finding a proper work-life balance where you also set time aside for friends, exercise, or meditation for example, is pretty difficult. That's actually what gave me the idea to bring together a community of remote workers to work, live, and travel the world together where the hassle of accommodation, flights, work spaces, gym passes, and social activities are taken care of so remote workers can enjoy the remote lifestyle to the fullest. If anyone's interested you can find more info here http://www.theremotetrip.com

Excellent personal story on finding the right workspace in a remote situation. Home office vs. coworking space is a conversation I have with other remote workers quite a bit, and the answer is different for everyone.

Working from home might genuinely be the ideal environment for those closest to the introvert end of the spectrum, and I think those are the people who form angelic choirs of blog posts asking if you have met their lord and savior, the Fortress of Infinite Solitude, Home Office Edition. For them, the quiet work environment makes their jobs dramatically more enjoyable. But for me, it was the opposite: I'd gone from management (high social interaction) to software development (lower social interaction), and from working in an office (hundreds of people) to working from home (two cats), and expected that this would all be fine.

Working remotely for 7 months now. Three things I do not miss: the neon lights that used to trigger my migraines on a regular basis, the 45-60 minutes of public transport commute in the heat/cold, and the need to look busy when the work is already done.

While I did experience a bit of a breakdown at one point, it is something to overcome. I like the "lazy days" of regular work in coffee shops, libraries and sometimes even pubs (no alcohol during work hours though, that's bad on so many levels). When there is heavy need of cognitive abilities, I tend to stay in, start the day with a cold shower, breakfast and coffee, then work at my standing desk.

I find standing desks really something all offices should support for their workforce. It keeps you active during the day and allows for greater focus. Start small, go for what fits your physique. Use a rubber mat. I would suppose it also helps with what the author calls "off-days", since I do not encounter them. There is always something to improve upon. If no hard work is available, I just work on documentation and on learning new skills that advance my work/life/career. This allows me a good night's sleep.

I have tried coworking in both co-rented offices and "office hotels" as well as working from home. For a long while I had a (pretty expensive) seat at a coworking space that I didn't use, but just knowing that I could leave my isolation at home and go there made me feel less isolated.

I wish I could work in cafes, at friends' places, etc., but I just cannot bring myself to work without a proper big screen and keyboard, which means most nomadic coffee shop setups are off limits. One day working off my laptop and my neck, eyes, and back hurt. I need a proper desk, which makes it a lot harder to move around.

This mirrors my experience going from product support to software engineering. I moved roles in a company that was primarily a sales organization, so I still had a lot of responsibilities to other people that required social interaction. When I switched companies to join a product development team all the social interaction went away. In fact, earlier this year I went for several months without any real interaction with my co-workers, and I work in a cubicle farm. This had a severe impact on my mental health, to the extent that I'm in therapy now. I've since started seeking more interaction with co-workers during the day because I actually need it to work effectively.

People often write about software development as if it's a solitary activity, but I can only do my best work for other people. Having personal relationships with my co-workers makes me a stronger developer, and I can't do it remotely.

>But for me, it was the opposite: I'd gone from management (high social interaction) to software development (lower social interaction), and from working in an office (hundreds of people) to working from home (two cats), and expected that this would all be fine.

Could it be the change from bossing people around (being a manager) to being bossed around (being a dev)? Because all the rest (interaction with friends, walks, going to the gym, etc.) one could still have while working remotely -- like the author says they did for the first months anyway.

What I like about working from home with regard to socializing is that I get to choose when, where, and with whom I socialize. I didn't have that control in an office setting. While most of my clients and colleagues are in other cities, I make sure that I have several local ones, if for no other reason than to have some professional socialization opportunities. And of course there are a dozen groups that I could choose to participate in - actually more than I'd ever have time for.

Feeling isolated is not a good thing for most people; that's why libraries, coworking spaces, and meetups exist. After all, the human being is a social animal. But what we can do to prevent this is plan where to work from and where to go, and analyze whether working remotely is good for us or not. I found this piece really helpful, as I'm tired of reading about how nice it is to be a digital nomad. Well... it is, if you're an outgoing person, or the type who likes to feel pushed to always go the extra mile. But we have to keep people from just going to the middle of nowhere expecting amazing things to happen, because you have to be the one who moves first. Loved this!

I love working at home. Not going out of my house for weeks on end doesn't bother me at all, and I'm much more productive when I'm working on my own. I think I could be quite happy as a shut-in. I can see how the lack of a social life would bother some people though. I guess the success of working remotely depends a lot on your emotional needs and personality.

On another note, I do have to disagree with the author with regards to making the most money in either New York or the Bay Area. Perhaps the salary looks bigger on its own, but when you consider housing costs, food, gas, taxes, and other costs of living, you actually end up making a lot less than you do in other locations. I've received multiple offers from the Bay Area, and one or two from New York, but they just can't compete, all things considered. Plus, I don't have any desire to cram my family into a 1,000 square-foot cubbyhole, when we can enjoy seven times the space elsewhere for half the price.

Does anyone know of a valid study of how workgroup size, the ratio of meetings to individual working time, etc. compared to well-organized remote work? By well organized, I mean designed to mitigate the problems cited in this article and otherwise.

Everyone's into deep learning, but what would I actually do with it? With some other field, like computer graphics, one can fairly quickly get a 3D cube spinning on their screen and know it has some relation to the special effects in the Star Wars movie they just saw. No one makes it obvious what the hobbyist can expect to do with deep learning or how it relates to the broader world.

Don't be deceived: the reason why these plans look nice is because of the team structure of the Zelda project. The design team spent a lot of time drawing up polished graphics and layouts and then "threw it over the fence" for implementation, in waterfall fashion. This is still a common practice within Japanese teams, as evidenced by, for example, Mighty No. 9's documentary, where you can witness an entire level constructed in Microsoft Excel. [0] The separation of roles is not a definite downside if the game design is already well-understood, and American teams have flirted with big up-front design on occasion, but tend to lean towards making sure everyone stays hands-on and can test and iterate independently.

One of the stories about Zelda that appears in interviews is that the second quest is the result of a miscommunication about how much space was available: They could have had a single quest that was twice as big.

The main direct advantage of drawing everything out is that you can quickly explore different types of setups (relative scales, positioning, iconography, etc.) and do a few passes of testing before committing it to code, for the same reasons one might do wireframing and mockups for application UI.

Very disappointing. Uploading a few sketches to a blog is hardly what I'd call "releasing original design doc". Where's the rest of it? What about the story? Boss mechanics? Weapons? Enemies? Dungeons?

I once took an English class where we read screenplays that were printed in a book. One of them was of the great movie "Chinatown."

The professor -- who had never worked in film -- said: "See, the screenwriter really comes up with everything. See how he thinks of every scene, every bit of dialog, that the actors and the directors follow."

Later I read a book about the making of Chinatown and learned that the script was some huge thing that Robert Towne and Roman Polanski rewrote every day while they were making the movie. The version in the textbook was the corrected script, in the order of the final cut.

Anyone who takes a class in video game design or development should be wary of those who haven't really done it.

Now that there are a bunch of AI/ML-related links on the front page, this is probably the best time to ask:

As I learn deep learning, from a practical point of view, I found that the idea is simply to feed some "black box" with labeled data so that next time it can give you the correct label for unlabeled data. In essence, it's pattern recognition. What do you think?

And then, as I try to find use cases for ML (you know, finding a problem for the solution), I found that many problems that can be solved with ML can actually be solved with rules. For example, detecting transaction fraud. You just need to find the right rules/formula. Forget ML; if you can't hardcode if-else, just use a rules engine. What do you think?

So, I'm starting to think that ML is good for solving problems where (1) we're too lazy to formulate the rules, or (2) the data is too complex/big to analyze with rules (as in understanding images or voice). What do you think?
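To make the rules-vs-ML point concrete, here is a minimal sketch of the kind of hard-coded fraud check the comment describes. Every field name and threshold here is invented for illustration; a real rules engine would externalize these conditions rather than bake them into code:

```python
# Hypothetical rule-based fraud check. All fields and thresholds
# are made up for illustration, not taken from any real system.
def looks_fraudulent(txn):
    rules = [
        txn["amount"] > 10_000,                 # unusually large transaction
        txn["country"] != txn["card_country"],  # cross-border mismatch
        txn["attempts_last_hour"] > 5,          # rapid retries
    ]
    # Flag the transaction if at least two rules fire.
    return sum(rules) >= 2

txn = {"amount": 12_000, "country": "US", "card_country": "BR",
       "attempts_last_hour": 1}
print(looks_fraudulent(txn))  # two rules fire -> True
```

The appeal is obvious: every decision is auditable and explainable. The ML case starts when the "right rules" are too numerous or too subtle to enumerate by hand.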

I also have a question regarding ML. Are there resources where I can see how to treat video sequences (series of images / spatial/temporal continuity) as inputs? I'm trying to find a starting point for learning, and I have a use case in mind.

I owe a great deal of my own personal success to Patrick McKenzie and Thomas Ptacek, both of whom have been steadfast, consistent and generous advisors (both in public comments and in "hey can I bounce this off of you" emails).

After following Patrick's writings and stories for a number of years now, I can confidently say that his relentless transparency has been one of the greatest gifts I received in the industry. His advice may not strictly work for everyone in the literal sense, but I believe that diligently attempting to use his suggestions as a template is, itself, a highly productive exercise in programming and business.

There is one particular note I want to make about patio11's success: Patrick is a phenomenal marketer with remarkable business savvy who happens to be a programmer. He is not primarily a programmer, which is evidenced by his recent work at Stripe and the work he is best known for on HN (essentially, writing about shipping software, not the software itself).

This is not to say he is not a good programmer - I simply can't comment on that, though I have reason to believe he is after seeing Starfighter's game. Rather, he leverages that skill set as a means to an end, not an end in itself.

I think this is a really important point to make because I see many people who try to pursue significant career success by e.g. ranking up on TopCoder, or open sourcing impressive software. While those things can lead to success, there is a vast, long tail of people who are very capable programmers with no recognition doing those things. Healthy self-promotion and efficient improvement/maintenance of one's technical skills has a much higher probability of success than attempting to become Fabrice Bellard.

This is demonstrative - in my opinion, the sum of all of patio11's advice can be summarized as follows: Don't be a programmer, be a $SOMETHING who happens to program, and program well.

I read these reports from Patrick with interest, but feel like he inhabits a different universe than I do.

Pinboard made $256K last year, so I operate in at least the same financial ballpark. But I do my taxes on TurboTax and have never spoken to an accountant or lawyer. My business is a sole proprietorship.

From my perspective, Patrick overcomplicates everything he undertakes with business processes and overhead. From his perspective, I'm probably an irresponsible slacker.

The upshot is that there are as many ways to run an online business as there are people, and how you do it depends as much on your personality as on objective factors. Big props to him for writing about his experience so openly, and in a way that so many people clearly find helpful.

Reading these is always interesting to me. Looking at these sorts of sites and the number of articles and the buzz they gather on Hacker News and other sites, I always assume they must be doing high six figures in income. Yet the ones that are transparent are generally in the $300k-a-year-or-less revenue range.

I've always considered my side projects and businesses failures, but judging by these numbers I've been more successful than I realized. So I feel good about that, but I think I really need to re-evaluate my goals and how I pursue them. Because I've been extremely negative about what are apparently successes.

Perhaps I also need to be more open to hiring a broker next time I sell a project.

Patrick never ceases to deliver on value to this community and many others. Even though this is a story about how he latest startup didn't go as planned (he's now full time at Stripe), he drops knowledge bombs for us all to learn:

- knowing when to move on to something new (Appointment Reminder -> Starfighter) when he didn't have the 'fire in his belly' any longer

- financial planning using a simple spreadsheet: the retirement fund!

- when to borrow money and how to calculate risk

- when to join a company (rather than start something new)

- the value of personal leverage (personal and professional development)

- when and how to sell your startup

- the trials of shipping (six weeks became three years)

- setting goals for the future

- the value of family

I've added Patrick's year in review to a list of other bootstrappers and solopreneurs whose writeups I have found helpful, instructive, or inspiring:

I'd love a series of interviews with the people you've dealt with over the years at your Ogaki bank and their perceptions and misconceptions about your account and the transactions that go in and out of it.

I'm a bit surprised that he was running Appointment Reminder as a sole proprietorship. My business ventures have mostly sucked, but I've always found the separation of personal and business concerns afforded by proper incorporation to be invaluable, and not expensive at all in the big picture. (This was in Finland; I imagine Japan could be very different.)

On the other hand, I guess that makes AR a fine example of a company that would have benefited from Stripe Atlas if it had existed back then? :)

(I realize this is rather tactless, but I ask in the interest of fairly weighing the risk/rewards of startups/consulting vs. big tech)

Is it fair to read this to mean that patio11's net worth is < $250k? (Based on the fact that his liquid assets were negative prior to the sale of AR, and said sale netted less than the price of a house in Tokyo; median price: ~$250k USD.)

Great write-up. Does anybody have more information on the attack near the end: "...defrauded by Lithuanian hacker gang which figured out how to use our application to proxy a telephone call through Twilio's phone number verification feature to a phone sex line in the Caribbean..."

Maybe the time and effort demands of both wouldn't have allowed it, but given how Patrick's personal debt grew, it seems it would have helped to do at least a little bit of consulting while he was working on Appointment Reminder.

Doing freelance / consulting work on the side while bootstrapping your product to profitability seems increasingly common now. Though this is definitely more sustainable in locales with lower costs of living while you also don't have to provide for a whole family.

As a counter-argument: linear regression is to ML what the goto statement is to programming.

Linear regression looks great on paper since you can derive residuals and slopes, compare the individual "effects", etc. But that's unnecessary, and in some cases wrong, when the goal is mere prediction and not explanation. The big difference between ML and statistics is that the latter selects a "correct" linear model and then assumes a distribution for the "errors" due to pesky reality. The effects are used for explanations (538 / Nate Silver style wonk/punditry). Machine learning, on the other hand, tries to predict as close to the observations as possible without imposing a model or caring about an explanation.

The simplest introductory machine learning approach should not be linear regression but rather a 1-nearest-neighbor model.

E.g., rather than giving data about house prices and square footage, the questions should be "How do you predict the price of a house in a given location?", "What are the relevant features?" (location, location, location, school district, number of rooms, sq ft, etc.), and "How would you collect labels/data?" (Zillow; exclude prices older than 2-3 years).

The simplest answer would be that the price is the same as that of the neighboring house (closest lat/long) with similar square footage sold recently. This can then be implemented as a weighted distance metric and tested using leave-one-out cross-validation (I know, not the best metric). But consider how nearest neighbors lets us incorporate location information in a natural manner. That is very important and cannot be incorporated as elegantly in a linear regression model.
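A minimal sketch of that 1-NN idea, with a weighted distance over (lat, long, sq ft) and leave-one-out cross-validation. All of the data and the square-footage weight below are invented for illustration:

```python
import math

# (latitude, longitude, sq_ft) -> sale price; toy data, not real listings.
houses = [
    ((47.61, -122.33, 1800), 650_000),
    ((47.62, -122.35, 1200), 480_000),
    ((47.60, -122.30, 2400), 720_000),
    ((47.65, -122.40, 1500), 530_000),
]

def distance(a, b, sqft_weight=0.0001):
    """Weighted distance: location dominates, square footage contributes a little."""
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                     + (sqft_weight * (a[2] - b[2])) ** 2)

def predict_price(features, data):
    """1-NN: predict the price of the closest house under the weighted metric."""
    _, price = min(data, key=lambda row: distance(features, row[0]))
    return price

def loocv_error(data):
    """Leave-one-out cross-validation: mean absolute error of the 1-NN predictor."""
    errors = []
    for i, (features, price) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        errors.append(abs(predict_price(features, rest) - price))
    return sum(errors) / len(errors)

print(predict_price((47.615, -122.34, 1300), houses))  # -> 480000 (nearest listing)
print(loocv_error(houses))
```

The `sqft_weight` knob is exactly the kind of thing LOOCV would be used to tune; note how location falls out of the distance metric for free, with no model specification needed.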

A big part of ML is applying a different set of methods across several domains. Thus, for beginners, teaching ML should not be about teaching linear models or gradient descent, but rather about how you start thinking from an ML perspective.

The whole "machine learning is just fancy statistics" discussion that happens on Hacker News endlessly is often pedantic semantics. However, in the case of linear regression, this is basic statistics that is an analysis life skill and has many practical applications outside of the hardcore TensorFlow blog posts. (case in point, I first learned linear regression during my undergrad in a "Statistics for Business" class)

The article is missing a bit, since Word for Mac (1985) was more the model for the eventual Windows 1.x version, but isn't even mentioned. There were also versions for other "windowed" OSes during the same 5-year period before Windows was sufficiently viable to make it work. People often forget that new apps appeared first on the Mac until around 1990 or so, when Windows 3.0 shipped (Word appeared a bit before); basically, MacOS was much more advanced than Windows up until that point. After Windows 3, the platform that apps appeared on first flipped completely to Windows. I shipped my first MacOS app in 1987.

I remember the joy of writing a 200 page book in FrameMaker on a smallish SPARCstation in 1994. Even in 1998 it was easy to convince my boss to license FrameMaker for Windows as writing software. Word was still too buggy to write anything exceeding a few pages or with embedded images with it. Sadly, Adobe never marketed FrameMaker to a mass market.

Word 1.1 is the catalyst that got me to try Windows. I was familiar with Word for DOS at the time, and the potential of proportional fonts and OS-level printer drivers caught my imagination - it was the future, I was sure. It didn't hurt that I got a promotional copy for cheap with Windows bundled. I still have the floppies around somewhere.

Many years later, and I'm still writing Windows programs. Excellent strategy on Microsoft's part.

Nice to see this hugely transformative software available in an archive.

> To access this material you must agree to the terms of the license displayed here, which permits only non-commercial use and does not give you the right to license it to third parties by posting copies elsewhere on the web.

Reading the comments gave me the illusion that only the smartest people deserve to use Word. It seems I could get a Mensa certification for using every edition of Word well, without too much learning... God, I need to test my IQ; it should be 9999.

We use G Suite (Google Apps and email) at work. Though I have MS Word installed on my machine, I find that I increasingly use Google Docs for most of my document needs. Only when I need to do something that just can't be done in Google Docs do I open MS Word. This is becoming increasingly rare.

I mainly use MS Word to open documents shared by Clients and to check the resumes of candidates which are generally in MS Word format in India.

We reverse engineered the IOCs included in Thursday's report from the FBI, which released malware data supposedly associated with the 'Russian' election hack. It turns out it's from a hacking group in Ukraine, anyone can get the malware for free (though if you're nice you'll donate to their BTC account), and the DHS and FBI sample was several versions behind.

The trouble is that the report was released at the same time as the expulsion of 35 Russian diplomats and the whole context around it, including some of the language used in the report, implies it's proof of a Russian election hack.

We also analyzed the IPs they shared, and they're just a mish-mash of known attack IPs around the world - probably hacked hosts being used as an attack platform by everyone. The ISPs include Linode and Digital Ocean.

> There was no penetration of the U.S. electricity grid. The truth was undramatic and banal. Burlington Electric, after receiving a Homeland Security notice sent to all U.S. utility companies about the malware code found in the DNC system, searched all their computers and found the code in a single laptop that was not connected to the electric grid.

"and found the code in a single laptop that was not connected to the electric grid."

So, the first step in penetrating a system was accomplished: getting the code onto a device that could potentially (or so the attacker may have hoped) be connected to the target network.

Until I hear that the code was put on the laptop by its owners intentionally and for legitimate reasons, this sounds like an attack. The headlines and responses are arguably alarmist and not fully informed, but it's still an attack. The dismissal of alarmism seems intended to obscure the likelihood that there was, in fact, the start of an attack.

If a spear phishing attack fails, was it not still an attack? That it was an attack in the direction of the power grid is, by definition, alarming. [EDIT: The first sentence in this paragraph confuses my point, and can profitably be ignored.]

The Intercept's article could have been less sensationalist itself, and I wonder what the motivation for the overdramatization of the Post's failure would be. Competition? Schadenfreude? Sensationalist link baiting?

Regardless, I had hoped for a more sober and professional style from The Intercept since its early days, and I long ago stopped reading it, modulo the odd HN post.

> Editor's Note: An earlier version of this story incorrectly said that Russian hackers had penetrated the U.S. electric grid. Authorities say there is no indication of that so far. The computer at Burlington Electric that was hacked was not attached to the grid.

This is journalistic ethics in action. WaPo has publicly admitted a mistake and revised their article as a result. Greenwald can (and deserves to) give himself a pat on the back.

That being said, I am disappointed in his bad faith equivocation of the (occasionally sloppy and partisan) news media with "news" that is patently false and engineered to maximize advertising revenue. Calling this "fake news" just gives the GOP more (dishonest) ammunition in its 40 year war with the Post.

Fake news isn't a new phenomenon. In the 1870s, a satirical/comedic article in a New Zealand newspaper about an impending Russian invasion led to such widespread hysteria that the colonial government almost bankrupted itself. To sate the public, it had to invest heavily in naval vessels and build 17 forts to fight off the (non-existent) Russian menace.

It's a wee bit hypocritical for the US to get so upset about these things, though, considering all the elections the CIA has been involved in, not to mention the stuff that Snowden revealed (like tapping the German Chancellor's phone). Everyone knows that whatever espionage Russia is doing to the US, the US is doing back in kind. All the powers will be hacking each other.

This is what happens when the majority of journalists both have a profit motive and cozy up to the establishment: they'll say anything and a low/no-information populace gobbles it up without a grain of salt.

The Intercept, Democracy Now, Thom Hartmann, TYT, et al. are in a precarious position because they often speak the truth, which is inconvenient to those in power. Whether they can mostly survive and measurably supplant establishment media by demographics isn't certain. Whether Trump will target investigative journalists and net neutrality (likely) Erdogan-style is anyone's guess.

What is the boundary between what we consider "fake news" and news with a tiny kernel of truth somewhere in it (in this story it sounds like a semi-related laptop was infected with some malware) that is sensationalized to claim something much broader? I think that there are some pieces of news (e.g. meme-news that people post on social media sites, that would be similar to what one might read in a tabloid) that get automatically rejected by my BS filter a lot easier than something like the piece mentioned in the article, which was posted by a respectable journal.

I'd like to see better evidence of the US election being hacked, but I understand they wouldn't want to release anything that could cut off their ways into Russian systems. I don't know how anyone expects to get real proof of it without us deciding to give away strategically important gaps in Russian infosec.

Isn't malware on some worker's laptop a common way of penetrating disconnected networks? Not that it matters, as it serves the agenda equally well being either a "false" or a "true" story. Seems like calling Russia out on covert operations was too scary for them, so they chose hacking as a more acceptable thing.

>Burlington Electric said in a statement that the company detected a malware code used in the Grizzly Steppe operation in a laptop that was not connected to the organizations grid systems. The firm said it took immediate action to isolate the laptop and alert federal authorities

This is why every time there was a post about "banning fake news" on HN, I specifically gave WashPost as an example (knowing they've written pure propaganda/false stories in the past) and questioned "whether a site like WashPost would have its fake news articles blocked on Facebook, too", when they are caught manufacturing stories (which they arguably did here).

Because if such articles from the big media companies wouldn't be blocked, then the system would be biased and unworkable, and Facebook or Google will just find a lot of backlash against them over it.

The Washington Post is a national media institution that has had its press credentials revoked by Trump. The organization is therefore, by definition, at a disadvantage when it comes to gaining CONTEXT about the subject of its reporting. Thus, The Washington Post is mired in CONTROVERSY. This is only natural.

People misinterpret each other's text messages and internet comments, often with CONTROVERSIAL outcomes, because the initiator of the message has failed to provide sufficient CONTEXT. This is only natural.

The entire aviation industry vilified Captain Sullenberger, even though he had just saved 155 people's lives, because everyone investigating the incident lacked sufficient CONTEXT to explain to themselves, and each other, how Sully was able to accomplish something that had never happened in the history of aviation. Captain Sullenberger did, in fact, possess sufficient CONTEXT, which he gained over a long career of landing other failing airplanes. This CONTEXT possessed by Sullenberger, at least as portrayed in the movie, is written all over Tom Hanks's face in the form of a stiff upper lip. That guy was as cool as the other side of the pillow the whole time, before, during, and after his water landing. Once sufficient CONTEXT was provided, the CONTROVERSY immediately subsided. This is only natural.

The United States of America is at a fairly CONTROVERSIAL point in its history. I wonder, if Americans sought out the true CONTEXT of the people they find the most CONTROVERSIAL, their political opponents, whether said CONTROVERSY would naturally subside.

Find someone you disagree with, and see how long you can keep talking to them.

The really shameful part of this is the xenophobic garbage spewed by Democrats who are upset their candidate lost the election. They can't handle the thought that they simply lost, so now anyone who disagrees with them is an agent planted by Putin.

Is anyone else concerned that this means IT can choose to hold back the version of Chrome in their organizations? Auto-updating Chrome has quietly been one of the best solutions to the pain of backwards compatibility with older browsers. In the past we not only had to worry about compatibility between browsers; we had to worry about compatibility between browser versions. Further, auto-updating Chrome has dramatically reduced the time from new web feature implementation to widespread deployment and thus usability. I take it that turning off auto-updating will not be widespread, but I'd rather not risk it.

One notable thing about the dominance of Chrome (and the decline of Firefox) is that WebKit is now a de facto standard. The desktop and mobile versions of Chrome and Safari are based on it, and according to Microsoft, "any Edge-WebKit differences are bugs that we're interested in fixing." (https://en.wikipedia.org/wiki/Microsoft_Edge). That covers every major desktop and smartphone browser except Firefox.

At some point, developers are going to target their stylesheets for WebKit only, because Firefox rendering differences are going to be seen as nuisances that aren't worth overcoming in order to reach a tiny minority of users. Firefox will have to work toward WebKit compatibility as Microsoft Edge does.

WebKit is doing pretty well for something that was originally part of KDE.

It's good to see that Chrome's auto-updates can be turned off and updates pushed manually to users at a time of the IT administration team's choosing (I had to dig down into a few links to find this information). But it seems like there's nothing equivalent to Firefox ESR [1] in place, and Chrome would continue to update with the same frequency as the general public release (of the consumer focused version). Does anyone know if a longer term security-only-updates model is available with this like Firefox ESR (just for the sake of curiosity)? When I searched online I found a two year old reddit thread that indicated there wasn't one.

Chrome is the new IE. Enterprise apps are being developed with the assumption that they will only ever be run on Chrome, and they become fragile because of that assumption. A recent example: someone had set up a race condition of timers that only worked (i.e. resolved in the specific order needed for the app to work) in WebKit-based browsers. No one cared to fix that, because it worked in Chrome, and that was all that was needed.

I think the most interesting part of all this is that they're offering support for it[1]. That, plus it now being part of G Suite, makes it an 'official' service/product.

And since Chromium is an open-source browser which receives contributions from many developers[2], this will add Google to the list of companies which take contributions from OSS and make money off it.

Please correct me if I'm wrong, but I'm not aware of any other OSS product that contributes to their revenue in as direct a way. The only things that come close are Kubernetes and TensorFlow, but neither is at the same 'level' that Chrome is now.

My CDMA phone dropped service for a few minutes after the leap second.

It's absurd that we continue to keep subjecting ourselves to these disruptions and the considerable amount of work that goes into handling leap seconds for the systems that aren't disrupted by them.

Leap seconds serve no useful purpose. Applications that care about solar time usually care about the local solar time, while UT1 is a 'mean solar time' that doesn't really have much physical meaning (it's not a quantity that can be observed anywhere, but a model parameter).

It would take on the order of 4000 years for time to slip even one hour. If we found that we cared about this thousands of years from now, we could simply adopt timezones one hour over after 2000 years; existing systems already handle devices in a mix of timezones.

[And a fun aside: it appears likely that in less than 4000 years we would need more than two leapseconds per year, sooner if warming melts the icecaps. So even the things that correctly handle leapseconds now will eventually fail. Having to deal with the changing rotation speed of the earth eventually can't be avoided but we can avoid suffering over and over again now.]

There are so many hard problems that can't easily be solved that we should be spending our efforts on. Leapseconds are a folly purely made by man, which we can choose to stop at any time. Discontinuing leapseconds is completely backwards compatible with virtually every existing system. The very few specialized systems (astronomy) that actually want mean solar time should already be using UT1 directly to avoid the 0.9 second error between UTC and UT1. All that is required is that we choose to stop issuing them (a decision of the ITU), or that we stop listening to them (a decision of various technology industries to move from using UTC to TAI+offset).

The recent leap smear moves are an example of the latter course but a half-hearted one that adds a lot of complexity and additional failure modes.

(In fact for the astronomy applications that leap seconds theoretically help they _still_ add additional complication because it is harder to apply corrections from UTC to an astronomical time base due to UTC having discontinuities in it.)

Once again we're screwed by different people wanting "time" to mean different things. There is no hope for humanity once we start traveling anywhere close to light speed into and out of the solar system.

I propose a new "non-time" time system. It has exactly two real values, which range from 0 to tau, and an integer: the first real number is radians of Earth rotation, and the second is radians of the rotation around the Sun. The integer reflects the number of complete cycles. So lunch time in Greenwich is 'pi'.

It has the benefit that its "source" is actually the planet, so we can use a telescope at Greenwich to pick a certain alignment of stars as the "zero, zero" point, and then each time it realigns to that exact point, you can increment the "year" count.

I believe we can build a robust system to support this out of stone. We'll need to create a circle of stones, but using a small hole drilled through a stone and a marker on the ground we can always identify (0.0, 0.0), (0.0, pi/2), (0.0, pi), and (0.0, 3*pi/2).

I guessed most big services would be using something akin to time smearing [1] since the first big leap-second outages years ago. Is there any reason why Cloudflare would be unable to use this technique?

I'm curious what if anything would be problematic if everything just effectively "ignored" leap seconds (i.e. would this outage not have occurred?) --- one minute is always 60 seconds, an hour is always 60 minutes, and a day always 24h. I mean, if you consider the fact that human society has managed to function perfectly well with almost everyone not knowing nor caring what a leap second is, and yet apparently some software does --- leading to problems like this --- something doesn't feel right.

I was at a relative's and tried to load two different web sites.. my first thought was that their wifi sucked. My second was "will we finally learn a lesson today about the disturbing trend towards constant re-centralization of all our online services?"

What causes real-world problems with leap seconds is actually unrelated to the nasty interactions of metrology and solar time -- it's a specific and avoidable problem with how NTP (and many OSes/languages) represent time -- it's a types issue.

The right way for computers to represent time is with a number that represents the number of constant-rate ticks that have elapsed past some agreed-upon epoch. If you know what the epoch is and how long each tick is (lots of people use 1 / 9.192 GHz), it is easy to know how many ticks are between any two time values, and you can convert a time value with one epoch to one with a different epoch and tick rate -- you can do everything people expect to do with time. There are no numbers that represent an invalid time value, and for each moment, there is a unique time value that represents it. There's a one-to-one mapping with no nasty edge cases.
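That epoch/tick-rate conversion can be sketched in a few lines. The epochs, offsets, and rates here are made up for illustration; exact rational arithmetic avoids rounding:

```python
from fractions import Fraction

def convert_ticks(ticks, tick_len_from, epoch_offset_s, tick_len_to):
    """Convert a tick count on one timescale to a tick count on another.

    tick_len_from / tick_len_to: tick lengths in seconds (Fractions).
    epoch_offset_s: seconds from the target epoch to the source epoch.
    """
    seconds_past_target_epoch = ticks * tick_len_from + epoch_offset_s
    return seconds_past_target_epoch / tick_len_to

# 1000 ms past an epoch that lies 60 s after the target epoch is
# 61 s on a one-tick-per-second timescale:
result = convert_ticks(1000, Fraction(1, 1000), 60, Fraction(1))
print(result)  # 61
```

Because both timescales are constant-rate, the conversion is pure arithmetic with no edge cases, which is exactly the property leap seconds destroy.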

Leap seconds are a step function that is added to a constant-rate timescale (whose name is "TAI") in order to generate a discontinuous timescale (whose name is "UTC") that never is too different from solar time. There is nothing fundamentally abhorrent about leap seconds -- there are just good and bad ways to represent, disseminate, and compute with timescales that involve leap seconds.

The right way to handle leap seconds can be seen with many GNSSes and PTP (very high precision hardware-assisted time synchronization over Ethernet). GPS, BeiDou, Galileo, and PTP all involve dissemination and computation on time values -- and with dire consequences for failure/downtime/inaccuracy.

The designers of those systems all somehow converged on the choice to separate out the nice, predictable, constant-rate and discontinuity-free part of UTC from the nasty step function (the leap second offset). Times in all those systems are represented as the tuple (TAI time at t, leap offset at t). This means that the entire system can calculate and work with (discontinuity-free and constant-rate) TAI times but also truck around the leap offsets, so when time values need to be presented to a user (or anything that requires a UTC time), the leap offset can be added then. Crucially, all the maths that are done on time values are done on TAI values, so calculating a time difference or a frequency is easy and the result is always correct, regardless of the leap second state of affairs. Representing UTC time as a tuple makes the semantics of that data type easy to reason about -- the "time" bit is in the first element and is completely harmless -- the edge cases all live in the second half of the tuple.
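A toy version of that tuple representation (field names and the sample leap-offset values are assumptions for illustration; real GPS/PTP encodings differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UtcTime:
    tai_s: int        # constant-rate TAI seconds past some epoch
    leap_offset: int  # TAI - UTC at this moment (the step function)

    def utc_s(self) -> int:
        # Apply the discontinuous step only at display time.
        return self.tai_s - self.leap_offset

def interval_s(a, b):
    # All arithmetic stays on the continuous TAI part, so an interval
    # that spans a leap second still comes out right.
    return b.tai_s - a.tai_s

# Two instants 2 TAI seconds apart, with a leap second in between:
a = UtcTime(tai_s=100, leap_offset=36)
b = UtcTime(tai_s=102, leap_offset=37)
print(interval_s(a, b))       # 2 (correct)
print(b.utc_s() - a.utc_s())  # 1 (naive UTC subtraction is wrong)
```

The single-integer NTP/Unix representation is effectively `utc_s()` alone, which is why subtracting two such timestamps across a leap event gives the wrong duration.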

NTP and Unix (and everything descending from and affected by those) have made the mistake of representing and transmitting time as a single integer, TAI(t) + leap offset(t). This is not a data representation that has sensible semantics, and it is very hard to reason about. First of all, the leap second offset is nondeterministic and also unknown -- there is no way to get it from NTP and there is no good way to know the time of the next leap event. Second of all, there are repeated time values for different moments in time (and when a negative leap second happens, there will be time values that represent no moment in time). Predictably, introducing nondeterministic discontinuities doesn't work so well in the real world. There are a bunch of bugs in NTP software and OS kernels and applications that show themselves every time there is a leap second. It's not even just NTP clients that struggle -- 40% of public Stratum-1 NTP servers had erroneous behavior [0] related to the 2015 leap second! Given that level of repeated and widespread failure, the right solution is not to blame programmers -- it should be to blame the standard. The UTC standard and how NTP disseminates UTC are fundamentally not fit for computer timekeeping.

GNSS receivers and PTP hardware get used in mission-critical applications (synchronizing power grids and multi-axis industrial processes, timestamping data from test flights and particle accelerators) all the time -- and even worse, there's no way to conveniently schedule downtime/maintenance windows during leap second events! "Leap smear" isn't an acceptable solution for those applications, either -- you can't lie about how long a second is to the Large Hadron Collider. GNSS and PTP systems handle leap second timescales without a hitch by representing UTC time with the right data type -- a tuple that properly separates two values that have the same unit (seconds) but have vastly different semantics. The NTP and unix timestamp approach of directly baking the discontinuities into the time values reliably causes problems and outages. The leap second debacle is not about solar time vs atomic time; it's about the need for data types that accurately represent the semantics of what they describe.

Are there any public "skewing" NTP pools that distribute the leap seconds as lag / gain over 24 or 48 hours as some of the large providers do? That seems to be the generally accepted answer to leap-second chaos, and certainly seems simpler than all of the hidden bugs in systems all over the place trying to deal with :60 on a clock.

The proposed idea of relying on undocumented internals and blacklisting attribute names to securely sandbox formatting strings is _really_ _dangerous_. Never do that in production code! Language expansions could render your sandbox unsafe at any time.

To sandbox this, we just have to walk a list and enforce some rule. For instance, the rule might be that all elements after sys:quasi must be string objects or else (sys:var sym) forms, where sym is a symbol on some allowed list. Thus (list foo) would be banned.

A custom interpreter which calculates the output string while enforcing the check is trivial to write as a one-liner.

Of course, if your program just evals such an untrusted quasiliteral, it has access to the dynamic/global environment:

This is actually a problem with a lot of languages that allow runtime template-like string interpolation.

For example, Groovy on the JVM has GStrings, with which one can do fairly nasty things.

Also, it is actually fairly hard to lock down most of the template languages on the JVM for user templates. (If you are going to allow user templates, I recommend one of the Java Mustache-like implementations.)

If you are making a new language, the easy way to fix this is to make formatting a property of your string literals, not a runtime function. I.e. instead of having "foo {bar}".format(bar=bar), have "foo {bar}" be equivalent to "foo " + bar.

This sidesteps the problem because only literal strings are formatted.
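Python's f-strings are an instance of this design: the template must be a literal, so an untrusted runtime string is never interpreted as a format string. A small illustration:

```python
bar = "world"
# Interpolation happens in the literal itself, at compile time:
print(f"foo {bar}")  # foo world

user_input = "{__import__}"  # attacker-supplied text
# Embedding it interpolates the *value*; its braces stay inert:
print(f"foo {user_input}")   # foo {__import__}
```

Contrast with `"foo {}".format(user_input)` style APIs, where letting the attacker control the left-hand string (the template) is what opens the hole.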

I was always puzzled by Python's insistence to forego string interpolation until the latest version.

Runtime string formatting, even if done safely (e.g. .NET's String.Format() which doesn't have property access AFAIK), can still cause unexpected exceptions at the very least, and suffers from inferior performance.

Yeah, I never liked Python's "new-style" format because of this. It didn't occur to me that you could use field access to access globals (never done enough Python metaprogramming to mess with the reflection stuff), but I was afraid of arbitrary getters being invoked.
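A minimal demonstration of that globals leak (the class and the "secret" are made up for illustration):

```python
SECRET_KEY = "hunter2"  # pretend module-level secret

class Event:
    def __init__(self):
        pass

# If an attacker controls the format string, dotted field access lets
# them walk from any handy object back to the module's globals:
template = "{0.__init__.__globals__[SECRET_KEY]}"
print(template.format(Event()))  # hunter2
```

The only object the attacker needs is whatever your code passes to `.format()`; the dunder chain does the rest.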

In general I'm very wary of runtime string formatting. Strings tend to be untrusted input with a large degree of freedom. format strings are almost always known at compile time (and more trustworthy). If your interpolation system is more than simply mapping keys to values or positions, you should probably restrict it to compile time. Feel free to expose a harder-to-use runtime API. Rust has compile-time format strings, for example. They're not as powerful as `str.format`, but they could be without there being security issues. JS has a different syntax for format literals. Regular strings cannot be "formatted", you must specify a string literal with backticks and that gets converted to a string value when the interpreter gets there. These literals can execute arbitrary code, but since it's just literals there's no way for an untrusted string to get in there.

One main use case for runtime string formatting is i18n. But that really should use a different solution. Most string formatting APIs are geared for programmer convenience -- the programmer is writing the code and the string. The scales shift for translators, who are only writing the strings. They don't need things like field access and stuff.

Another use case is template engines and stuff like that. In that case, field access is useful, but you probably should exert more control on these things (which is exactly what jinja2 seems to be doing here)

At one point I toyed with an idea for a super-type-safe template engine in Rust. It would validate the templates at compile time, and additionally ensure that the right types are in the right places. For example, it could ensure that strings that get interpolated with the HTML are either "trusted", escaped, or otherwise XSS-sanitized (using the type system to mark such types). Similarly, url attributes (href, etc) can only have URLs that have been checked for `javascript:`. Never got around to writing it, sadly.

So is the issue that the format string can be controlled arbitrarily? It's good to warn users about it, because some may not know the dangers, but in general you don't trust user input, so I don't really see this as something you need to build something custom around; rather, just be careful and follow good practices. It should be understood not to pass user data directly to internal code without sanitizing it. Proposing custom code that uses undocumented internal features is overkill and also dangerous, since things that aren't documented can change suddenly.
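If you do need to accept user-supplied templates, one sketch that avoids blacklisting internals is to subclass `string.Formatter` and reject anything beyond bare field names (a whitelist idea, not a vetted sandbox):

```python
import string

class SafeFormatter(string.Formatter):
    """Allows only plain positional/keyword fields -- no attribute or
    index access -- so a format string can't walk object graphs."""
    def get_field(self, field_name, args, kwargs):
        if "." in field_name or "[" in field_name:
            raise ValueError("attribute/index access not allowed")
        return super().get_field(field_name, args, kwargs)

fmt = SafeFormatter()
print(fmt.format("hello {name}", name="world"))  # hello world
# fmt.format("{0.__class__}", object())  # would raise ValueError
```

`get_field` is the documented hook `str.format`-style formatting uses to resolve each replacement field, so rejecting dotted and indexed names there closes off the `__globals__`-style traversal entirely.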

3. Typically, translators are hired for their native language abilities, and not for their technical prowess. I've met precious few who knew how to open a text editor, let alone hack your product via its strings.

I worked with Python and I18N/L10N for about 15 years. The way I always handled localization was to parse all our strings into a PostgreSQL database, and then provide a web interface for translators to do their work. This interface provided translators with the full-context of the strings they were translating, which internal strings often don't, prevented the inclusion of certain characters and keywords, and kept the translators from screwing up the formatting. By doing it this way, we got much better translations, and our internal strings were never out of our control.

I do love writing python, but it's pretty shocking when I find out you can write something like `event.__init__.__globals__[CONFIG][SECRET_KEY]`. That language just does not care about privacy or information hiding at all, I guess.

Doesn't this apply to any language that has string interpolation - Ruby, Python, JavaScript, Perl, etc.? And doesn't it not really matter, because it's not realistic to use a dynamic string template in a program?

233 countries? It would be much better to organise the data in some sort of hierarchy, given that having the UK, Wales, and Scotland all on the list is somewhat confusing (and it leaves out US and Australian state legislatures).

Also, I'm wondering how the data was collected - the party affiliation information for the Australian parliament is very strange. Not entirely wrong, but probably misleading.

That's an awesome project. Some civic tech initiatives promise to bring transparency to representatives' activity/laziness, vote records, or corruption. This one promises to unify datasets in a consistent, comparable manner. Very interesting.

They don't even have current data on who is in parliament for many countries, and in the cases where they do, that data is essentially worthless. I really don't see the point of this.

If you want to bother at all, you should have data on the level of http://abgeordetenwatch.de (for Germany only but surely similar projects exist in other countries). So how they voted, which committees are they part of, which jobs (beside being a politician) do they have. If you can get it, even which lobbyists they've met with (http://ec.europa.eu/transparencyregister/).

Thank you for such a useful project; it's a good start. I'll happily contribute data sources, and I can also translate the website into other languages if requested.

And most importantly for those who live in countries with huge tax rates: next time, when I protect my hard-earned money that they try to steal as tax and inflation, I'll use the feature to donate it to this website.

Something that stuck with me throughout the article was that the concept of "you can do anything" was almost masked by the fact that he had placed all of his "apples" into one basket--Rajeev. Of course this was a different time, and I think it highlights just how important the internet and technology have become in our professional success.

Had this been present day, Henderson could have tried to make use of others through collaboration, just as Rajeev himself pointed out towards the end. Somewhere in the article he mentioned his doubts about his teacher, and that's something that I think most people need to realize. Teachers are just people with their own faults. Nowhere is it written that your teacher is going to know the answer to your success. If you continually find yourself lost and doubtful, you should extend your reach and try to seek help from other minds as well.

He was on a journey with thousands of forks within thousands of roads, and simply locking yourself in a room for 15 hours a day, essentially brute-forcing different paths, isn't a healthy way of going about research or anything in life.

Wow, I see how this is relevant to startups, because it's one of the best essays about grad school and PhD research I've seen. The people who attempt it are capable and driven, but a good advisor is often critical. There are a lot of hills to climb, and the most important thing to learn is how to guide yourself when the way isn't clear! We want to change the world...

When I saw these lines I thought maybe his Advisor wasn't doing such a good job:

A year or so of research with Rajeev, and I found myself frustrated and in a fog, sinking deeper into the quicksand but not knowing why. Was it my lack of mathematical background? My grandiose goals? Was I just not intelligent enough? Or maybe it was the type of research Rajeev had me doing.

Then he moved on to a thesis and graduated, which shows that Rajeev was doing his job as a Boss and Professor. Advice about using your strengths, working with others, focusing on success and minimizing mistakes... it really does translate to most quests.

I'm sad the writer doesn't remember being happy since he started his PhD. Choosing to make your dream your job is a dangerous thing, especially if you can't still enjoy the path. He's good at writing, so I hope he enjoys that now.

A quote from this long article: "Now you know what makes theoretical physics so hard," he said. "It's not that the problems are hard, although they are. It's that knowing which problems to try and solve is hard. That, in fact, is the hardest part." As with startups: all startups are hard, but knowing which one to pursue and give life to is very hard.

"Shut up and calculate" was indeed not coined by Feynman. It was, in fact, coined by David Mermin in an essay he had written once.

The amusing thing is that Mermin himself had forgotten that he had coined it and claimed Feynman to be the source. Eventually, he looked into it and found the earliest reference to the phrase was his own essay! (with no reference to Feynman)

His book, Boojums All the Way Through, is one of the most entertaining books about his adventures as a physicist.

(For those who do not know him, he co-wrote the standard text book on Solid State Theory).

As a physicist, I see people all the time wondering what to do with it and looking for a justification for all the hard work. But physics is a hobby subject. Its rim is so vastly complicated that you can push and push at the boundary your whole life and get nowhere. You have to do it because you love it, and you have to accept the abstract nonsense of it all. I also studied math and art history, so I was down to do things I thought were abstract awesomeness without wondering about a job. I was lucky in that I'm a software dev, so I had a job anywhere, but my point is that I really feel for people who follow what they love and then become disillusioned. It really sucks.

This is very dangerous advice to give a young person. But the author should have done a better job at interpreting the message. If your father tells you that you can do whatever you want, do you conclude that you can get good enough at tennis to win the US Open? No, of course not, that's absurd. But winning the US Open is MUCH EASIER than discovering the Holy Grail of physics. Properly understood in this context, the father's message meant "if you want, you can become a physicist" - and it was probably correct. The author's downfall was that he overinterpreted the promise of the message and was also too ambitious to accept the lesser reward of "merely" becoming an average professional physicist.

What we'd created is called a toy model: an exact solution to an approximate version of an actual problem. This, I learned, is what becomes of a colossal conundrum like quantum gravity after 70-plus years of failed attempts to solve it. All the frontal attacks and obvious ideas have been tried. Every imaginable path has hit a dead end.

Isn't that a clue that one of the premises is fundamentally wrong? I'm no scientist but I rely on the scientific method, and questioning my assumptions when I'm stuck almost invariably proves more productive than refining my hypothesis. OK, my problems are very shallow, but nature's complexity generally seems to be the result of simple processes, elaborated and iterated. The author's description reminds me very much of the experience of painstakingly 'solving' one side of a Rubik's cube before realizing the more general iterative approach.

Very good read, and resonated with me because I had read the same new agey books at the time, went to study physics with the same "I'll find the grail" philosophy and had felt the painful blow of disillusionment, together with other blows that convinced me to leave the path much earlier than the author of the article.

Many years later, I feel that "the grail" is still the driving force behind most of my thoughts, but frankly, I doubt it is reachable by thought. Suppose someone solves quantum gravity. I'd be very excited and curious - it would be wonderful and fascinating, but I believe any claim that "physics is solved" stated afterwards would be as misguided as Lord Kelvin's claim in his time. Any solution would eventually just set the stage for the next grail chase, with more food for the mind to chew on from an infinite supply, and no answer will really make a dent in the armour surrounding the question of what the essence of this food supply is, or its relation to the thoughts that contemplate it. I can't prove any of this, of course...

I went into university (not physics) with the same stars in my eyes, but I'd read far more history of science. I knew that almost everyone who tries to make more than an incremental discovery fails miserably; I just thought it was very honorable to make the attempt, if you thought you might have what it takes.

I left because, when I looked around after many years, it was very clear that (where I was) absolutely none of the professors around me had any intention of solving the problems they were paid to discuss, nor any interest in doing so. They were quite capable of becoming angry at any sign that others did. So even if I did want to solve the problems (which I still did), hanging around them would be more hindrance than help. They wanted the prestige, they wanted to cash the people's checks - just so long as they didn't have to do the job, because it might pose some small risk to that reputation, and affect the size of their wine cellar.

Rajeev's eventual answer had more to do with reputation than big problem-solving; he may have been a functionary when push came to shove, as well.

One thing I got from this article is that the art of doing science takes years to develop. Developing a taste for what is good research, and a sense of direction for what is a good path, only comes from an apprenticeship model where you copy and learn from your mentor. It really shows how important taste, guidance, and perseverance are in order to avoid getting lost or distracted.

"Shut up and calculate" hasn't produced much in the way of concrete or practical results compared to the heyday of fundamental physics in the first half of the 20th century that produced quantum mechanics, special and general relativity, the atomic bomb, etc. It has produced extremely complex mathematical systems like string theory that seem to have led nowhere.

Quantum mechanics is probably "incomplete" as Einstein argued. Hence attempts to unify general relativity and the current quantum theory are likely to fail, as they appear to, since a revised quantum theory is needed.

If the data -- angular velocity distributions of stars, etc. -- used to support "dark matter," "dark energy," and other patches to the prevailing theory of the Big Bang and cosmology is in fact evidence that Newtonian gravity does not apply at galactic scales and above, then general relativity is not correct at galactic scales and above. Again, this would make unifying the established quantum theory and the established general relativity theory incapable of matching observed reality.

The ubiquitous lack of secure longer-term jobs like Einstein's civil service job at the patent office -- he was not a post-doc -- makes deeper conceptual analysis of the outstanding problems in physics today difficult, probably impossible.

It seems like academia needs a "20% time" thing like Google. You can get a grant for doing cyclotomic fiber bundles in a single dimension, because it's mathy and publishable and not too far from the mainstream, even if the likelihood of this being The Grail (or real in any way) is low.

You can't get funding to look at something completely off the cuff. Even 100 years ago Einstein couldn't have gotten funding to investigate some idea that distorts distance and time. I think 20% time to investigate whatever crazy idea you want would be beneficial to making more substantial progress in the real fundamental problems.

tl;dr: success (in many walks of life, as in science, especially in abstract branches like theoretical physics and mathematics) is simply not quitting, and has almost nothing to do with winning big (Nobel, Fields, Abel). And those who stay long enough gain tenure.

It's not glamorous, it's shitty. Long hours, low pay. But you do science. And no one ever can take that away from you, which is nice.

This is a nice list, and the ability to implement these algorithms certainly won't hurt.

But I have to say that knowing these algorithms alone won't help you much during a job interview with a smart employer like Google.

The reason is simple but often overlooked by many people: the most important thing is not these algorithms themselves but the ability to recognize them in problems.

You may learn pretty quickly how these algorithms work and how they are implemented, but it may take years of practice to gain the ability to recognize them.

Google won't ask you directly to implement Dijkstra's algorithm. They may give you a problem which on the surface has nothing to do with graphs. It may take a while before you have a light-bulb/aha moment and realize it's a graph problem.

In practical, non-interview problems, the ability to recognize algorithms is much more important than knowing their implementation. You can always find an implementation on the Internet, after all.
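The recognition point can be made concrete with the classic "word ladder" puzzle (my illustrative choice, not a question attributed to Google): it reads like a string-manipulation exercise, but it's secretly breadth-first search over an implicit graph whose nodes are words and whose edges connect words differing by one letter. A minimal sketch in Python:

```python
from collections import deque
import string

def ladder_length(begin, end, word_list):
    """Length of the shortest transformation sequence from begin to end,
    changing one letter at a time through dictionary words.
    This is BFS over an implicit graph: words are nodes, one-letter
    differences are edges. Returns 0 if no ladder exists."""
    words = set(word_list)
    if end not in words:
        return 0
    queue = deque([(begin, 1)])
    seen = {begin}
    while queue:
        word, steps = queue.popleft()
        if word == end:
            return steps
        # Generate all neighbors: every one-letter substitution.
        for i in range(len(word)):
            for c in string.ascii_lowercase:
                nxt = word[:i] + c + word[i + 1:]
                if nxt in words and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + 1))
    return 0

# "hit" -> "hot" -> "dot" -> "dog" -> "cog": a ladder of 5 words.
print(ladder_length("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"]))
```

The graph is never built explicitly, which is exactly why the problem doesn't look like a graph problem at first glance.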

This site is EXTREMELY popular in India, and used a lot by students AND interviewers (I know many who simply ask questions from the front page of G4G on a given day). It's the inverted tree equivalent in India.

I'm someone who was actually interviewed by GeeksForGeeks because a junior from college connected them to me (They do interviews with people who have gotten placed in * dream * companies... not my terminology).

In the interview, which was done over email, I actually mentioned that resources like G4G are bad for studying because they over-simplify algorithms and reduce them to silly proportions, and also encourage rote learning. To my surprise, they published exactly that ON THEIR SITE. Speaks volumes of their editorial team (?). This article too has little basis in reality; it's more one guy's list.

I strongly suggest you use much better resources for learning algorithms than this site, which is (by and large) the W3Schools of algorithms/data structures.

It's pretty pointless to ask something like a linked-list puzzle. Either you know it or you don't, and if not, coming up with a solution that took others many years to devise is like being asked to invent something on the spot - i.e., it's practically impossible. So it's a ridiculous kind of question and doesn't show anything useful about the candidate.

I genuinely want to know where people use these algorithms in their code. I'm a non-CS dev to begin with, so maybe I don't know where to use them since I didn't get a formal CS education. This way of interviewing is not what I prefer. I have been told I write better code than my CS-grad peers, but I have no clue about these algorithms and data structures. What do you guys think about this form of interview?

In my opinion, it's a settled question now. If you are looking for a job, you'd better cram these lists or you are dead meat. The screening tests and the interviews basically boil down to this set of questions at most companies.

This list is very funny, straight out of the 90s, because we are in 2017 and most developers just spend their working days basically writing forms and storing/fetching data over the network/in a database.

Isn't the idea of preparing for software development interviews ridiculous? Instead of improving my algorithm skills to become a better developer, I find myself memorizing a ton of problems just so I can answer similar ones during interviews. It feels like I'm preparing for the SATs again.

I think I've only ever had one of these algorithms come up in a hiring situation - unless I have brought them up myself in the normal flow of conversation.

I had to solve a problem for which binary search was the correct solution (along with some caching of results, although that part was fancy show-off stuff and not strictly necessary to solve the problem satisfactorily). I did the caching first, then sort of froze when it came to the binary search because I was thinking "I should describe my thinking here first" - and the developer in charge of the exercise took my hesitation as not knowing the solution, so he finished it and that was that. The test was also in Python, a language I don't know that well; the theory was that if you could figure your way through in Python despite not knowing it, you could handle new situations with aplomb. So I guess I failed the aplomb part.
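For reference, the piece the commenter froze on is short once written down. The exact interview problem isn't given, so this is a generic sketch of iterative binary search in Python, not a reconstruction of that exercise:

```python
def binary_search(items, target):
    """Classic iterative binary search over a sorted list.
    Returns the index of target, or -1 if it is absent.
    Runs in O(log n) by halving the candidate range each step."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # prints 3
print(binary_search([1, 3, 5, 7, 9], 4))   # prints -1
```

The irony the comment illustrates: the hard part in the interview wasn't the algorithm, it was narrating it under observation.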

But these questions should be a signal to you as an interviewee. Unless they are extremely salient to your proposed job (e.g., you are applying for a position teaching algorithms) they're a sign the recruiting effort at this workplace is not very healthy.

What that means is that talent and skill will be erratically dispersed throughout that organization. Requests for new staff will take a long time and may not fill needs, and oftentimes specific managers strongly influence who gets hired where, for a variety of reasons.

Personally, I play along with these questions but make a game of pointing out how incredibly synthetic and unrealistic the conditions people put around them are. The goal of the game is to basically force the interviewer to come out and say exactly what algorithm they want, by way of how many other aspects of real-world software and systems they want to exclude from the conversation.

What a typical, uninspired, and pretentious list. I've recently started to opt out of interviewing people because I'm often teamed with someone that will Google one of these, think they're some sort of genius, and proceed to make some poor twenty-year-old feel like a doofus for not knowing the algorithm for a convex hull.

Speaking of which -- seriously? The only time I even had to LOOK at that algorithm was when I read a very old game programming book -- before I even went to college, mind you -- and generating pixel-perfect collisions for arbitrary polygons was one of the chapters (the game example was one of those meteor blaster clones).

I have strong feelings about this and I think this article is a complete waste of time, not to mention lazy (it looks auto-generated, anyway) because it perpetuates the idea(l) of making the software engineer interview process as arcane and difficult as possible.

Nobody needs to be able to code these in an interview. Ever. For certain domains you should be aware of them and be able to look up decent implementations. But to think that level of knowledge is important in an interview is bogus. I could just as easily ask similar questions and weed out most CS grads that get into Google or Facebook with these:

- Please implement a first-order low-pass IIR filter.
- Tell me how the butterfly pattern in an FFT gets you from N^2 to N*log N. Oh, and implement an FFT.
- Write a basic PID controller implementation.
- Tell me how you'd handle a Field Oriented Control system that needs to run in voltage limit most of the time - what stability issues may occur?
- Write a fixed-point implementation of the sin(x) function.
- Implement a 2-pole 2-zero transfer function. For bonus points, do it in fixed point without rollover or saturation problems.
- Assuming you have a matrix library available, give me the boilerplate code for a Kalman filter.
- What kind of ODE solver should you use for long-term stability when simulating planetary systems?

These are similar difficulty questions from a different domain, but many of them are likely to be used far more often in that domain than any of the interview questions in TFA are likely to be used in their domain.
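For a sense of scale, the first question on that list fits in a few lines. A sketch in Python, using the exponential-smoothing form, which is one common way to write a first-order low-pass IIR filter (the specific form is my choice; the question doesn't prescribe one):

```python
def lowpass(samples, alpha):
    """First-order low-pass IIR filter (exponential smoothing):
        y[n] = y[n-1] + alpha * (x[n] - y[n-1]),  0 < alpha <= 1
    Smaller alpha means heavier smoothing (a lower cutoff frequency).
    State starts at 0, so the output ramps toward a constant input."""
    out = []
    y = 0.0
    for x in samples:
        y += alpha * (x - y)  # move a fraction alpha toward the new sample
        out.append(y)
    return out

# A step input converges toward 1.0:
print(lowpass([1.0, 1.0, 1.0, 1.0], 0.5))  # prints [0.5, 0.75, 0.875, 0.9375]
```

Like the TFA list, knowing this is table stakes in one domain and trivia in another, which is the commenter's point.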

The goal of an interview is to ascertain whether the candidate is capable of doing stuff and learning stuff, and whether that's likely to carry over into the stuff you need done. It's not to see whether they can produce an answer to some specific problem on the spot. How you do that I'm not telling - it's hard enough without helping you find the people I need ;-)

JFC, that's such bullshit. You want the mundane? Go for it. You actually want someone who can take a bunch of real operational data and solve the problem when the Person With The Money says, "We need to know what is actually happening with ____. And we've promised it in two days. You're it."

Who gives a flying fuck about writing the best sorting algorithm? "sort" works just fine, unless you're Google and microseconds matter. And then, you're really at the edge of R&D. You need to be able to manipulate data with aplomb. You need to be able to write an algorithm that works, and then refine it to make it go a hundred (or more) times faster once you understand why it is so slow.

This doesn't go far enough. That there is a link between high cholesterol and heart disease is only a hypothesis, not a scientifically proven fact. Lowering cholesterol does not necessarily lower heart disease. Read more here: http://www.nytimes.com/2008/01/27/opinion/27taubes.html

Because the link between excessive LDL cholesterol and cardiovascular disease has been so widely accepted, the Food and Drug Administration generally has not required drug companies to prove that cholesterol medicines actually reduce heart attacks before approval. See: http://www.nytimes.com/2008/01/17/business/17drug.html

Wouldn't it be nice if there were more government warnings and less restrictions / regulations?

Imagine if the FDA, instead of blocking new drugs for 10 years and $1B, simply withheld its endorsement until satisfied by the clinical trials. Consumers could then take the government's recommendations into consideration when making a decision, and drugs could get to market much faster.

The history of medical reversals -- and in this case, nutrition reversal -- shows that the government isn't magic.

A whole raft of restrictions could be converted to warnings and recommendations, freeing up industry to innovate and consumers to take a little more responsibility for themselves.

Imagine the history of the past few decades if the state had outlawed any foods with more than X% cholesterol. Or trans-fats. Or any of the other food fads over that time. It would have been terrible, especially now that the recommendation is reversed. The whole time, consumers were allowed to factor government warnings into their decisions, but food producers weren't breaking the law by selling foods with (X+1)% cholesterol.

Unfortunately it does not highlight the danger of sugar and processed food. Someone can follow these dietary guidelines and still have a high intake of sugar, HFCS, processed food, and all sorts of food chemicals (emulsifiers etc.), all prevalent in the American food industry, and get cardiovascular disease.

> The new view on cholesterol in food does not reverse warnings about high levels of bad cholesterol in the blood, which have been linked to heart disease. Moreover, some experts warned that people with particular health problems, such as diabetes, should continue to avoid cholesterol-rich diets.

> The greater danger in this regard, these experts believe, lies not in products such as eggs, shrimp or lobster, which are high in cholesterol, but in too many servings of foods heavy with saturated fats, such as fatty meats, whole milk, and butter.

Here is my paraphrased takeaway:

Cholesterol you see in your blood results is still bad. Whole milk, butter, and fatty meats are still bad. Foods like eggs, shrimp, and lobster might be good.

I don't think this changes any of my mental models. The foods that I always thought of as "probably not great" are still classified as such, according to this article.

Youtube is where most people get their dietary advice now. The most influential diet advice is coming from young attractive healthy looking people. Whatever they're eating seems to be working. Obviously most of these people won the genetic lottery, but they've also nurtured their body correctly with food and exercise.

This seems like a much better approach in convincing people what to eat anyway. Look at the results and imitate healthy people if you want to look and stay healthy.

> The greater danger in this regard, these experts believe, lies not in products such as eggs, shrimp or lobster, which are high in cholesterol, but in too many servings of foods heavy with saturated fats, such as fatty meats, whole milk, and butter.

Translation: the lobbyists for various polyunsaturated "edible plastics" are currently in the lead.

Hmmm this was just on the homepage then quickly removed.. top of the page then boom gone? Any reason why this is?

Also, I've been on statins since 37, after feeling my heart race and not being able to catch my breath after being with my girlfriend at the time. That was some scary stuff, and after the statin and change of diet I no longer have those types of bouts anymore. So maybe it's my change in diet (cut out 75% of fried food, and sugar intake is 50% less) and the statin combined, or maybe the statin is just a placebo and my change in diet helped eliminate those heart-racing attacks?

I had a lot of those attacks from 37 to about 39... I'm now 41 and haven't dealt with any such attacks unless I eat at, say, Five Guys or In-N-Out.

My surgeon friend told me medical studies are like the bible. You can find a paper that says something is bad for you and another that says it is good for you. So just believe the one that makes you happier.

For those of you who still believe it's OK or even good to eat a lot of saturated fats, if you look at the studies it's not much of a controversy:" Whether saturated fat is a risk factor for cardiovascular disease (CVD) is a question with numerous controversial views.[1] Although most in the mainstream heart-health, government, and medical communities hold that saturated fat is a risk factor for CVD, some hold contrary beliefs." https://en.wikipedia.org/wiki/Saturated_fat_and_cardiovascul...

Anecdotally, I've read that 50% of people who go on keto see a huge 2-3x increase in triglycerides and LDL-P (particle count as measured by NMR) - the #1 risk factor for cardiovascular disease. More LDL particles bouncing around in your arteries = bad. People with a condition that makes them break down more LDL have much less atherosclerosis: http://www.nejm.org/doi/pdf/10.1056/NEJMoa054013 People with a condition that gives them more LDL particles get more atherosclerosis: https://en.wikipedia.org/wiki/Familial_hypercholesterolemia Read Peter Attia's exposition that goes into detail (eating cholesterol is fine, though): http://eatingacademy.com/nutrition/the-straight-dope-on-chol... Keto could work, but it's a hyper-pro-level, high-risk diet requiring frequent blood work while still avoiding saturated fats and keeping fats primarily monounsaturated, which makes it very hard to follow. Plus it sucks for weightlifting.

Surprisingly, very low fat diets might be great for you; it might not be the absolute macronutrient composition that matters, but rather the specifics of the nutrients (GI, fiber, other nutrients etc.) and your genetic makeup: https://deniseminger.com/2015/10/06/in-defense-of-low-fat-a-... Scary correlations about saturated fats and neurodegenerative disease therein too.

Personally I'm sticking with a "balanced" ~10/45/45 carbs from protein, low GI carb/monounsaturated fat vegan diet - not wanting to risk side effects of any extreme (although I'm having to ensure adequate calcium, K2, D, B12 and DHA and EPA intake - If I didn't find it unethical to eat fish a pescetarian variety would likely be easier/healthier/less gassy). Tip for you vegetarians/vegans: look up low FODMAP foods; foodstuffs low in carbs indigestible in the small intestine which tend to produce more gas.

I haven't been myself but my wife is a process engineer and had to go to China to install and train people on a new machine so this is only second hand...

But one of the reasons China is so good at manufacturing is because everything is in one place (or at least clusters)... need 10,000 of an Integrated Circuit? The company that makes it is literally down the street. Need some raw materials? That's down the other street.

I can't think of a place in the US that is like that. If we need parts we have to wait for them to ship. (often from China)

It's not just a labor problem. It is my understanding that to be competitive in manufacturing you need to be vertically integrated, and I think that is harder to do in the US.

Although I hope it is true. I live in the US, and I don't believe in inherent United States exceptionalism, but I do feel like shipping goods by container ship is bad for the environment and bad for consumers. I'd much rather see things made locally.

Couldn't one country's CEO be unreasonably bullish or bearish on their country's capabilities relative to others? Do they really know how far along foreign manufacturing schemes are evolving? Nobody is sitting on their hands here; "Made in China 2025" and Germany's "Industrie 4.0" are pretty strong desires to push into advanced production using IoT, smarter automation, and all that jazz. Anecdotally, in a factory visit near Shenzhen, the manager claimed that moving to an automated production line has been pretty easy. Some areas are kept manual only because the worker is cheaper, for now.

Outcomes from competition are hard to predict. It's interesting that Deloitte predicts Germany "holds strong and steady at the number three position" in 2020 when their own survey has them jumping between 2nd and 8th within a couple of years.

I'm pretty sure our manufacturing output exceeded China's for most of the oughts, also. The myth that "we don't make things here anymore" is, well, a myth. We just don't employ people to make things anymore.

I read the article and skimmed the full study and was left scratching my head:

- What exactly do they mean by 'Competitiveness'?

- Why so much focus on labour costs, when manufacturing cost is increasingly driven by large capital investments (especially in China, where low interest rates and government encouragement have expanded capital investments for years)

- Why all the talk about R&D expenditure. Is that really a driver of manufacturing competitiveness (whatever that means) or manufacturing output?

- How do they expect (on page 46) China's consumption to rise to 46% of GDP by 2025, if GDP rises at 6.5% per year during the period? Such a shift would require consumption to go up by ~10% per year.
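That ~10% figure is easy to sanity-check as a back-of-the-envelope calculation. The sketch below assumes a starting consumption share of about 38% of GDP over roughly 8 years; the starting share and time span are my assumptions, not numbers quoted from the study:

```python
def required_consumption_growth(start_share, target_share, gdp_growth, years):
    """Annual consumption growth needed for consumption's share of GDP to
    move from start_share to target_share while GDP itself grows at
    gdp_growth per year. Derived from:
        c0*(1+g)^n / (Y0*(1+gdp_growth)^n) = target_share/start_share
    solved for g."""
    return (target_share / start_share) ** (1 / years) * (1 + gdp_growth) - 1

# Assumed: ~38% share in the base year, 46% target, 6.5% GDP growth, 8 years.
g = required_consumption_growth(0.38, 0.46, 0.065, 8)
print(f"{g:.1%}")  # prints 9.1%
```

So under these assumptions consumption would indeed have to grow around 9-10% a year, i.e., roughly half again as fast as GDP, which is the commenter's objection.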

I live in Europe. I'd assume 80% of the things I own are mostly made in China. Maybe 19% are made within the EU and 1% everywhere else. I can't recall a single product I own that was produced or even manufactured in the U.S., therefore I have a hard time believing that title.

Jobs that are vulnerable to automation are generally not worth keeping. That's no comfort to those losing them, though.

Science fiction from the 60s painted a world where people would lounge around in their airships, hopping between beaches and mountains and parties, while robots took care of everything and the lord scientists and engineers who gifted the world with such plenty smiled benevolently down on the citizens of the Age of Plenty. Yeah, that did not happen: nobody thought about who would own those automatons, and lo, it was not us.

We could just all stop buying made in China. Seriously, all these things are laws of economics. You can't bypass them, as they are natural laws. Economics is the study of human motivation. Motivation is what dictates what we do and don't, it rules our actions.

If you wanted to have an impact, be conscious about your actions, pay that extra cost and buy local if you care so much, and don't buy at all if there's no local options.

Buying instagram was a brilliant move. It's a far more enjoyable social network to use. Lately I've gotten into a scene that's basically a parallel universe where facebook doesn't exist. Everyone uses instagram. It's far more entertaining and creative, and far less saturated with anger and activism. I've dialed back my facebook activity heavily in the past year, to the point that I've deleted the facebook app. I have messenger and the facebook events app, but losing the news feed has been no loss at all.

I can see a future, not even very far away, where facebook is essentially the AOL of our generation and having an account there is a punchline.

I admire Mark Zuckerberg's confidence to pull the trigger on an acquisition like this, especially doing it without consulting his board. My initial thought was that he has a visceral feel for the rate of growth that makes a social network successful, having gone through it himself. He could probably tell just from their publicity about growing to a million users within 2-3 months that they were going to be huge.

Snapchat I actually get, since it's presenting a new kind of communication model. Facebook I get. But Instagram is just like any old image gallery, with a very rudimentary comment system and almost no features unrelated to image uploading.

The purchase of Instagram was a pre-IPO move to prop up Facebook's offering as a mobile company before their core product had actually transitioned. Investors' major question at the time was whether FB could transition from desktop to mobile, and Mark needed something to back that up. Mark had been quite vocal about Facebook's guesses around mobile, HTML5, and apps, and how he needed to reorganize the product teams, but couldn't get the story together in time. Thus Instagram.

The logic in both the article and most of the comments seems to utterly forget the context of which FB was in at the time.

I hate the way Instagram allows people to sign up and use the service without verifying the email address they used to sign up with. I'm assuming this is the case after someone used my email to create a profile on Instagram.

I started receiving notification emails from Instagram a few weeks ago, which I ignored at first thinking they were fake. On closer inspection they actually were from Instagram, so I clicked "forgot password" on the Instagram website, reset their password, logged in and deleted all their content and permanently deleted their account. By the looks of it, the profile belonged to some kid - a few family photos and so on.

It's quite slack of Instagram that this is possible. They should not be allowing people to create profiles and use the service before first verifying the email address used to sign up with. I guess basic verification is trumped by the need for "active users" to motivate these "stroke of genius" articles.

I still don't understand why Instagram took off, and I'm a millennial. You could already post pictures to Facebook when it came out. What advantage did Instagram offer over Facebook? Filters? I just don't understand my peers and I don't understand this industry sometimes.

A lot of people ask why Instagram took off and get a lot of different answers; I think that's the beauty of it.

I wasn't much of an Instagram user for a long time, until I started getting into photography. I'm still a total photography noob, but now my feed of pictures is a combination of friends and amazing photographers who serve as inspiration. I deleted my Facebook account a couple months ago and haven't looked back. Whereas I spent time on Facebook scrolling through vitriol, my time on Instagram is a constant stream of friends' lives and beautiful pictures.

I've realized a shallow social network is all the social network I need.

I was in San Francisco when this deal was announced, and I still remember all my supposedly "tech startup savvy" friends mocking Zuckerberg for buying a company with no revenue for a billion dollars. And yes, this group even included a Harvard MBA grad pursuing a career in entrepreneurship. It's no coincidence that this was the same group of people who also mocked Facebook's IPO valuation of ~$70B as proof that we were in a bubble. If there's one thing I learned from that experience, it's that people have no idea how to appraise high-growth high-potential ventures.

I'm doing art as a career after my comp sci degree and Instagram is just perfect for me. The fact that Instagram is pics only works very well for visual artists, and I can easily share my work in a relaxed sort of way. There are also tons of other artists on Instagram, and the way it's set up means I can see their artwork much more easily than on Twitter/Facebook.

Also Facebook is a dead zone now. It's literally pointless political junk and rehashed memes shared by "friends". The only usage I have for it is to message my old friends.

Snapchat is also starting to die out ever since Instagram implemented its own "snapchat" feature. IMO Instagram just does it better than Snapchat. Snapchat is just too bloated for me. See, with Instagram I can follow someone like Kanye West and see his life in cool pics and his "snaps" in a convenient way.

Instagram is what will take down Snapchat, or at least defend Facebook against Snapchat's offensive. Snapchat refuses, for reasons good for its product, to enable discoverability, which Instagram (thanks to Facebook's expertise) handles perfectly.

Instagram has also added to Facebook's main product through the autoplay videos, a tech that FB engineers could not get to work until the Instagram acquisition.

I never mocked this deal. I always thought this was a small price to pay for insurance that Instagram wouldn't subsume Facebook. It was only 1% of Facebook.

But I still think $19 billion was about $17 billion too much for WhatsApp. It's a messaging product that doesn't directly threaten Facebook the way Instagram does. They could have created 10x $1 billion teams to compete and easily done better than what they have with WhatsApp. It seems like a cowardly use of $19 billion. Oculus cost them $2 billion and there are many other breakthroughs that are equally underpriced.

Yep. Kudos to them, I remember thinking this was one of the dumbest acquisitions I had heard of, until Whatsapp. But Instagram has been a huge success.

The thing is that the Instagram and Whatsapp acquisitions are responsible for fueling this idea that all you need to do is create growth, and Facebook and/or Google will pay billions to acquire your company. Snapchat would never have gotten funding if there wasn't this dream that this could happen. We'll see if Snapchat is worth the $25B that is purported for their IPO, but I think those two acquisitions were the catalyst for all of this.

Playing the lottery and winning doesn't retroactively make you a tactical genius. Facebook saw a rising competitor, threw cash at the problem- a page from the playbook of practically every major corporation in history- and in the fullness of time came out ahead. This is a story about nothing.

I think the brilliance of Instagram is how they solved the problem of image ratio and rendering to different devices -- make everything a square. Around the time of the purchase, Facebook engineers and designers had been giving talks about image layout. (I'll edit this if I find the bookmark to the link.) I wonder if the purchase was partly about acquiring a working solution for handling images of different sizes, orientations, and aspect ratios.
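The "everything is a square" idea is easy to see in code: whatever the camera produces, a single centered crop normalizes it before layout ever has to think about aspect ratios. A minimal sketch of the arithmetic (my own illustration, not Instagram's actual code):

```python
def center_square_crop(width, height):
    """Compute the crop box (left, top, right, bottom) that trims an
    image of the given size down to its largest centered square."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# A 4:3 landscape photo (1024x768) becomes a centered 768x768 square:
print(center_square_crop(1024, 768))  # (128, 0, 896, 768)
```

Once every image is a square, grids, feeds, and thumbnails on any device render identically, which sidesteps the whole layout problem.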

Let's be honest here. FB bought IG because they were the competition, and what was the end result? Those guys at IG got benched for years and haven't done much since. So they won, if money was the goal. But if the goal was winning hearts and minds, and making an impact on the industry, they got sidelined. They lost. IG will never reach its full potential. So yes, a good purchase on FB's part. But for IG? Debatable.

I want to be excited by a good non-Apple laptop. But they're all just so terribly designed. They're always some sort of plastic, feel flimsy and bendy, have grills and screw holes and uneven surfaces and stickers everywhere.

I had a T530 for a few years at my first job and just hated it. It had fantastic specs but it felt awful and unreliable and like I had to babysit it. I got a 2015 MBP to replace it after IT damaged it (broke off a bunch of plastic bits from two grills) and while it was a migraine getting Windows and Ubuntu to dual boot, I don't think twice when closing the lid and slipping it into my backpack to go home. Tossing my backpack into my trunk, or in the overhead carry-on.

I would pay a fortune to have a solid chassis (not case) metal non-Apple laptop available.

- Pretty decent touchpad (not on a par with a MacBook, but good enough for three-finger gestures, although maddeningly imprecise at times)

- Despite the color/brightness issues, the HIDPI screen is sharp and readable under Windows 10

- The fingerprint sensor actually works (but doesn't hold a candle to TouchID)

- DisplayPort or HDMI output (depending on model, mine has both, and I use my Apple dongle for DP->VGA)

I disabled TrackPoint within a month due to RSI (I used original IBM laptops for years and it was a recurring issue for me).

TrackPoint is much more precise than the trackpad for accurate positioning, but I'd rather carry a Bluetooth mouse and retain the use of my fingers (whereas I'm perfectly fine with the Mac trackpad for drawing diagrams and pixel-level positioning).

Edit: Oh, and I run Linux on it through Hyper-V and Docker, since I need to run Windows 10. Had no trouble booting a couple of Ubuntu/Elementary Live USB drives for playing around, most of the hardware seemed to work.

A large chunk of the Thinkpad user community is pretty fed up with the ultrabook spec that Lenovo is shipping on its flagship models, namely the inability to effectively customize and modify them. I keep an X201 alongside my late 2013 rMBP simply because I can use ExpressCard and a dock and ethernet without carrying around a stupid dongle. I understand they are trying to chase Apple's market, but I expect them to fall flat with products like this. I can stand a few extra mm of thickness to actually have a usable product.

51nb (chinese forum) has been addressing the need for upgraded specs in the old chassis with things like their Thinkpad X62 https://imgur.com/a/As6On#uHTzOer (you can find the boards on ebay)

There are no products that fit the old ultraportable form factor. The T5xx series is great but can't be lugged around easily.

My Thinkpad X1 Carbon 2nd gen is still going strong 3 years later, but this 5th gen is pretty tempting.

Main questions:

- Are these "up to 15.5 hours of battery life" numbers purely theoretical? Even brand new, my 2nd gen only got about 6 hours of normal usage. I run Ubuntu so maybe these power saving tweaks are the result of closer hardware integration with Windows?

- The mobile broadband option (Qualcomm Snapdragon X7) is intriguing, it would save me carrying around a separate hotspot. Has this worked for any Linux users? What carrier do you recommend?

- Any reports of the Wigig Dock working on Linux?

Pros that I can see:

- Real function keys! The adaptive touch-key row on my 2nd gen is awkward and silly -- can't believe Apple followed their lead on this. Good riddance.

- That the 5th gen is even smaller and lighter boggles the mind. My 2nd gen is already ridiculously thin. After 5 years of hauling 5-8lb T4x models around NYC my back was killing me, and I was probably on the verge of permanent physical injury. The X1 was a godsend.

- Same old Trackpoint I know and love. Not for everyone, but a mouse on the home row + vim is ergonomic heaven for me. Never change, Thinkpads!

Cons that I can see:

- Looks like almost the same display as my 2nd gen, a 2560x1440 WQHD IPS. A small bump up in nits but that's it. Viewing angles are better than on older laptops, but I still can't read my screen in bright environments, and it gets gummed up with debris and smudges way too easily. Apple continues to dominate in this dimension.

- Matter of taste, but the Silver design feels like Lenovo is trying way too hard to look Apple-y.

As someone who is constantly on the road, having a built-in SIM card for connectivity is something I'm super excited about. This could be the 2016 MBP we wanted, but switching to Windows still sounds difficult.

I absolutely adore my ThinkPad X201; if there were any way to purchase a pristine new-in-box X201 I would. However, I recently picked up a 3rd gen X1 Carbon and absolutely love it compared to my rMBP '15. Sure, it's not quite as fast and the screen isn't quite as good, but I can run a true tiling window manager (not the garbage that is KWM on OSX) and have an unfettered dev platform.

I run Fedora, I like it more than Ubuntu or Debian and like the fact that I don't have to worry about random underlying features breaking all the time when I update (which wasted a ton of my time when I was using Arch Linux).

Why is the web design so amateurish? I really don't understand these companies that can't put proper priority on having web pages that look really good and luxurious. They spend millions on great industrial design but can't spend the thousands for a decent sales page.

It's not the sexiest laptop in the world, but my work Dell Latitude 7470 with an i7-6600U, 16GB of DDR4 RAM, an NVMe drive, and a 1920x1080 screen is the best PC laptop I've used. It works great with the Dell dock, where I have two Ultrasharp 21.5 inch monitors in 1080p, with the laptop screen flipped open; three usable screens.

Ubuntu on a ThinkPad X1 Carbon is my workstation. As a Go developer I only need Chrome, a terminal, and Go itself (which I install from the official tarball binaries).

Now that I have Google Fi I'm kicking myself for not getting the cell radio built in, as Google will send you a data-only SIM for free! I would never have to deal with another terrible airport or hotel captive portal again!

My coworkers are all Macbook Pro users and ask why I don't get one too. I just don't see the point. I think MBP's build quality is nicer but otherwise I get twice the machine for the same price and get to develop on the same OS as my project primarily targets (Linux).

"X1 Carbon is available with Microsoft Windows 10 Pro Signature Edition. No more trialware or unwanted apps. No more distractions, and easy provisioning for IT pros."

So do that with every version, and stop feeding people "unwanted apps" altogether. You just admitted that nobody wants them, and this irresponsible attitude is exactly why you got bopped for pre-installing the Superfish MITM malware.

I have the 2016 model running Arch Linux. The machine is a great piece of engineering. Light as a feather, and it feels durable. The carbon fiber body feels great in hand. Performance-wise, I have the i7 model with 16GB RAM and a 512GB NVMe SSD. The only slight negative would be battery life. I expected more, but it does last me a full 8hr day of work with the screen dimmed. I'm looking forward to purchasing a new one in another year or so.

As someone who is considering moving from a MacBook Pro 2015 to a Linux laptop, the keyboard and, in particular, the trackpad on PC models are the biggest disappointments. Apple has spent so much effort on the tactility of their laptops, while PC manufacturers still seem to be stuck in an earlier decade. I was a little bit shocked to discover that Thinkpads still have that terrible little mouse nipple, which I hadn't seen in about 10 years.

The laptop with the most promising keyboard/trackpad combo, that I have found, is the Dell XPS (it has physical trackpad buttons, but at least they're located at the bottom), which is probably not accidental; it looks a lot like a MacBook, too.

Please support the idea of building a better planet for our children and do not buy laptops whose batteries you cannot replace - managers at companies that build these kinds of products have to learn that they are acting against human interests and need to change their way of thinking.

Of course this applies to all laptops with non-replaceable batteries.

Still running my 2012 x1 carbon... One thing that has kept it going is that it's very serviceable. I hope the new model is as friendly towards component replacement, as I've had to swap out the DC power harness and cooling assembly so far.

I owned a first generation X1 for a while. Eventually I sold it because it only had 4GB of RAM, and there isn't much of a source for the proprietary SSD. The battery was down to 3 hours, and I wasn't going to replace it if I couldn't put at least 8GB/512GB into it.

I really liked the Lenovo X1. From 2012 to now I've gone from the MacBook Pro to the MacBook Air and now the MacBook Pro Retina. With the Lenovo, the touchpad wasn't quite as good, the power adapter wasn't quite as compact, and the battery life wasn't quite as good (but all were good enough).

The screen and keyboard were very good. The trackpoint is a nice addition. The Mini DisplayPort worked with my 27" Apple Cinema Display without issues in both Windows and Linux (Ubuntu worked perfectly BTW). Build quality on the machine was great. Didn't run hot or anything, had all the ports that my Mac did.

Later on they sabotaged the function key row and ruined the touchpad. After a year of customer complaints they put it back, but the things were so expensive I just stuck with a Mac. If I needed a Windows/Linux machine however, they would be my first choice.

If something like the current generation Thinkpad hardware (T or X series) could run OSX, it would be what the Macbook Pro used to be... Remember the first generation Intel macbook pro in 2006 which had a full complement of ports? Everything relevant and needed except RS232.

As someone looking to replace my 2013 13" MBP this is almost exactly what I want.

Enough has already been written about the new MBPs and why many of us will no longer consider them. I've already tried the Kaby Lake Razer Blade Stealth but returned it due to shocking quality and support issues. I considered the Asus Zenbook, but it has too few ports to be a serious contender, and the HP offerings all have screens with a lower resolution than I want.

I've always avoided Lenovo, partly because Apple were building machines I wanted and partly because my experience of Lenovo to date has been with low-end cheaper models, which suck. I'm willing to give them a chance at the upper end with this, though. The sooner this gets to market, the better.

I've had the X1 Carbon 3rd gen (Refurbished) for a few years now as my personal laptop. I don't do much heavy lifting with it, mostly League of Legends, Counter-strike, and some side programming. But it's my favorite laptop I've used. The 14" size is perfect for me and I like the feel of the rest of it. Performs well and didn't break the bank.

(I've always owned windows PCs, though I've used MBP at work for 2 years now, and have had a MacBook Air that I resold because it was too small and didn't have a niche to fill after my ultrabook and my ipad.)

Asus ZenBook working well for me with linux for the last 3 years. I've always had good linux support from thinkpads too. The Asus replaced an apple macbook pro, the model where apple shipped broken GPUs and didn't recall, so that one is a very expensive web-browser that constantly panics and reboots, giving me the opportunity to write a sentence to apple about how I feel about their miserable company that I'm sure nobody will ever read. "The Donald Trump of Computing Companies." Apple really are amazing though. Top comment is an apple fan boy desperate to continue to believe in apple and affronted by the existence of other laptops while other kool-aid drinkers up vote even though it's got stuff all to do with the ThinkPad X1 Carbon. Probably hasn't even got an apple logo shaved into his head!

What happened to the Carbon touch? This would be my next computer except it doesn't have touch. As a developer I use Chrome dev tools emulator with my Thinkpad W510 to perform touch testing. Plus it is fantastic for annotating presentations (Ubuntu, Compiz Annotate plugin).

I spent a year waiting for the Yoga x260 thinking I had found the perfect machine, after 11 months I am getting ready to send it in for the 3rd time for a "freezing" trackpad, remedied only by plugging in an external mouse. Also, lack of Linux support is insane for a company selling a working man's machine like this, going back to Dell after this :(

I'd upgrade to one of these in a heartbeat if I could afford to, but right now that's not an option. Thankfully my current gen 2 X1 Carbon is still an excellent machine that can handle most everything I throw at it.

I think it is super important to have honest discussion between developers, with as little flaming as possible, about the machines we are using.

I am using 5 yr old Macbook Air for development, it works well. I always imagined that I will use ThinkPad /w some Linux distribution in future. It is not trivial to use Linux on your laptop. While a lot of things will be faster, browser most likely will be slower and there will not be as many tools. What I would mostly miss is photography tools, Lightroom is essential for processing photos. I can't not do it.

Let me add one more thing to this fairly random post :). I think the X1 is better than the MacBooks and Airs at the moment; the docking station makes a lot of difference.

I have an X1 gen 3, and it has a dongle for the ethernet port. Given the thickness of it, I don't see how this new gen machine can hold a 'native rj45'. Perhaps it's one of those hinged ones that opens out?

At Google, if you want to implement new features (or a large refactoring), you'll need to write a design doc. In it, you should answer questions your reviewers might ask (common questions like: why do you want to do this, what are the alternatives, how do components interact with each other before/after your change). This is something like Python's PEP: you need a proposal to convince your reviewer that you have put thought into your change.

Also, don't send requests out of the blue. The original maintainer has to know that you're working on something. One reason is that your changes might collide spectacularly with other planned changes you weren't aware of. Another reason is that the maintainer might say no to the entire idea, much less the implementation, and save you time.

The mere creation of a fork isn't a sufficient signal, either; the project maintainer isn't going to treat that as a sign that you're actually working on something. (There seem to be an insane number of forks out there that are created and never changed again, apparently used to pad résumés by having important-sounding projects listed on user profiles.)

You know what would be cool? If I could create a fork of the project I was using, then write a feature I need in that fork. It becomes a PR, but the fork is also automatically (where possible) updated whenever the main branch is updated. If it can't be updated automatically, you are notified to rebase your fork against the upstream changes. This would have many benefits, including easy testing of PRs, forks that don't go stale, and overall helping close the loop on open PRs.

One thing I find different about GitHub versus before (e.g. on SourceForge), when you had to sort of sign up to be part of the group to propose a change, is that people feel a lot more free to suggest, out of the blue, quite impactful but ultimately rather superficial changes to a project. They argue and argue to get these changes accepted, and then disappear.

On several projects I've been on, I get issues or pull requests proposing to change the entire build system of a project. As you know, for C/C++ projects, the build system can be non-trivial, and maybe many years have gone into getting it to work well. And as things change, we adapt it. But as soon as it's not the flavour of the week, you get github requests suggesting to change to a completely different one, to suit some or other system's needs.

A tweak here, a tweak there, or an entire overhaul being proposed from people who haven't contributed to the actual code base at all. These infrastructure "suggestions" from people who aren't invested in the project but love to play with scripts built up "around" the code get very annoying. I don't know what the difference is exactly but it didn't happen with such frequency when things were more oriented around mailing lists.

I've now got 3 projects that have at least two build systems each, because of random people's preferences. That is a lot of extra work to maintain that is orthogonal to the actual project source code. I've started closing PRs that make infrastructural changes that I don't want to be responsible for, unless I can get the submitter to promise he'll be around for a while to maintain it. I've also started forcing people to put such changes in subfolders so that it's clear which one is the "supported" system. And I haven't shied from "assigning" subsequent bugs back to the original PR submitter. But sometimes that doesn't even elicit a response.

People: if you are going to suggest switching a project to a completely different build system, then disappear without promising to maintain said system, please think twice about changing something just because it doesn't suit your preference of the week.

Between the responses here and on the more recent Chrome for Business posting, I find myself wondering if there is an ever-widening split between the "push to prod" web-dev mentality and the "classic software" mentality.

This article gives a nice view for someone like myself who hasn't had that much experience with OSS development. While I'm kinda familiar with CI systems and the concept of coverage, could someone explain what the author means by "happy path" in coverage? Is that the most-used path in standard behaviour?
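For what it's worth, "happy path" usually means the default, error-free flow -- valid inputs, nothing failing -- so "happy-path coverage" is coverage that exercises only that flow and leaves the error-handling branches untested. A small illustration (my own example, not from the article):

```python
def parse_port(value):
    """Parse a TCP port number from a string, validating its range."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Happy-path test: well-formed input, the branch most suites cover.
assert parse_port("8080") == 8080

# The unhappy paths -- malformed or out-of-range input -- stay
# uncovered unless a test deliberately exercises them:
try:
    parse_port("99999")
except ValueError:
    pass  # the error branch, easy to forget in a test suite
```

A suite with only the first assertion can report decent line coverage while never touching the `raise`, which is exactly the gap the phrase points at.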

If a given PR would be acceptable except for its maintenance burden, and you do not expect the PR submitter to provide sufficient help with maintenance to compensate, you could request the difference from them as payment for acceptance of the PR.

Amazing work. Hector Martin (@marcan42) is an incredibly talented hacker. I still remember his post about enabling the hardware virtualization of the CPU in his laptop, an Acer Aspire 5930 [1], which had it disabled in the BIOS and was not user serviceable (I had a similar laptop, with a smaller screen, so his post was useful for me). Also his hacks and comments in a Spanish forum [2] for the PS2, Nintendo consoles, and others were plenty insightful. Then the PS3 hack. And now, getting Linux working on the PS4, even with 3D acceleration (without help, just with a few specs found on the web (!!!)). It is mind blowing :-)

Every time console hacking comes up, I start to wonder how well a manufacturer would do if their next console was open. Would they see a decrease in legit purchases if the console was open for hacking and exploitation? I would think it would be a lot like the PC game market, which as far as I can tell is still thriving today.

So assuming there is no economic impact, what is it that makes us want to lock down consoles (similarly cell phones) when we do not do the same to the personal computer we hold so dearly? It is a fascinating story that I suspect is due to timing and when devices hit markets but curious what others think about this.

"the CDude class, which is the main, player-controlled character in the game."

"CDoofus is the class for the enemy guys [...] Started this file from from CDude and modified it to do some enemy logic using the same assets as the sample 2D guy."

"COstrich is the object for the ostriches wandering about in the game. class COstrich : public CDoofus"

Hilarious. Reminds me of CBruce in Tony Hawk's Pro Skater: "Their code [...] originally written for Apocalypse [...] a Playstation game featuring Bruce Willis, which, we learned, is why in Tony Hawk the code for the classes of skaters is called CBruce."

This reminds me, I actually had some contact with the guys behind RWS many years ago, I was a young man and very excited about HTML/CSS and design in general, I ended up designing their forum (it was running on IPB): http://tinyimg.io/i/5GTDNgx.jpg

Postal 2 is one of my favourite games ever. Had a blast in both single player and online. Me and my friends still talk about it after all these years!

This, as well as being a fantastic gesture, makes it seem as though RWS understands something that many of its contemporaries don't: Games, or rather, their engines, must be open-sourced for those games to continue to be playable and relevant. You can't update your games forever, and sooner or later, they will be rendered unplayable by the inexorable march of technology. If you open source your engine, that doesn't have to happen.

Take a look at Doom. New content for Doom 1 and Doom 2 is still being released by the community, long after competitors like Duke3D have stopped. Why? Because Doom has a passionate community, and many modern, open source engines that make running the game on new systems a piece of cake.

I was pleasantly surprised by the amount of useful comments in the code.

I remember when the source code for Descent was released; not only was the code somewhat opaque (unless you were experienced with portal-style engines), but there were hardly any kind of comments to help guide you along.

This is completely off-topic, but in the vein of talking about old games. Does anyone have any suggestions for games that scratched the same itch as the old school RTS games like Age of Empires and Rise of Nations?

I've been searching for years but never found anything that eclipses the classics.

There are some interesting notes about how the multiplayer was originally implemented: [0]

"Once the game is running, everything is peer-to-peer. The only information the peers send each other is the local players' input data, which is encoded as a 32-bit value (where various bits indicate whether the player is running or walking, which direction, whether the fire button was pressed, etc.). No position, velocity or acceleration data is transmitted. NOTHING else is transmitted."

In order for this to work, they had to make sure all memory was initialized to the same values, so that each client had the same known starting state. They also had to use the same PRNG, initialized to the same state, so that a known, deterministic pattern would be produced. But eventually they ran into a problem that couldn't be solved in software: The FPU of different CPUs would not return the same results for the same inputs:

"However, despite our best efforts, there was still a serious flaw lurking behind the scenes that eventually caused serious problems that we couldn't work around. It seems that different Floating Point Units return slightly different results given the same values. This was first seen when pitting PC and Mac versions of the game against each other in multiplayer mode. Every once in a while, the two versions would go out of sync with one another. It was eventually tracked down to slightly different floating point results that accumulated over time until eventually they resulted in two different courses of action on each client. For instance, on the PC a character might get hit by a bullet, while on the Mac the same character would be just 1 pixel out of the way and the bullet would miss. Once something like that happens, the two clients would be hopelessly out-of-sync."
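The input-word trick from the notes is easy to sketch. The bit layout below is invented for illustration (the notes don't specify the real one); the point is that one 32-bit integer per player per tick is all that crosses the wire, and every peer feeds those words into its own deterministic copy of the simulation:

```python
# Hypothetical bit layout for one frame of player input:
#   bits 0-7: direction (0-255 heading), bit 8: running, bit 9: fire
def pack_input(direction, running, fire):
    """Pack one frame of player input into a single 32-bit value."""
    word = direction & 0xFF
    word |= (1 << 8) if running else 0
    word |= (1 << 9) if fire else 0
    return word

def unpack_input(word):
    """Recover (direction, running, fire) from a packed input word."""
    return (word & 0xFF, bool(word & (1 << 8)), bool(word & (1 << 9)))

# Round-trip: what a peer receives is exactly what the sender pressed.
assert unpack_input(pack_input(200, True, False)) == (200, True, False)
```

This only works if, as the notes describe, every peer starts from identical memory and an identically seeded PRNG, so that applying the same input words produces the same world state everywhere -- which is exactly why the cross-platform floating-point differences were fatal.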

The thing that struck me is "anyone who is fascinated by computers and spends all their free time playing with them can be a developer."

Back before developers were perceived as 'rich' and 'pampered' there were people who were fascinated by computers and spent all their time playing with them and were called 'nerds.' Then it became "cool" to be a developer or "you can get rich as a developer at a startup!" and then you get people who don't care at all about computers and really never have, working as developers.

My litmus test is often to ask someone when they show me a solution, "what other solutions did you consider?" If they have wandered around looking at different ways to attack the problem they are more typically 'nerd' type developers, if their response is "none, this works so I went with it, moving on." they are often just working a day job. Watching the two types of people from the late 90's to today, the people in it for the money burn out much more frequently.

A bit of background: in Austria, many people do an "Apprenticeship" ("Lehre") instead of going to high school. You work at a company and visit a vocational school (about 20% of time).

This is great for practical people -- less theory, more real world experience. But there is a major downside: If you didn't go to high school, you are not allowed to go to university without first completing preparatory courses that can take years.

There is also an upside: If you've worked for at least 4 years, and are under 30 years old, you automatically qualify for the "Selbsterhalterstipendium", which is around €700 per month to cover your cost of living while studying at university (you don't have to pay this back, and there is also no tuition).

Another good example of someone learning to program on their own because they wanted to and then leveraging that experience to get a job doing it professionally. The big secret to learning to program is that there is no big secret. It's basically a glorified trade job and everyone already has the tools in front of them.

That was a great read. Held back by her parents' preferences into a secretary position, she starts experimenting out of fun/laziness (many great works started that way), keeps improving, fights the fight in college, and is now at SAP working with serious tech. Congratulations on making it to the finish line, Denise!

So, you've worked from Excel to GUIs to databases to web stuff. Do you plan on trying a new paradigm of programming, or what's the next level?

Note: Also cool that you did karate on the side. I got my start in DOS apps (QBASIC), doing Windows apps in VB6 in a mundane, forced position, with karate on the side. I built new things in between assignments, including learning heavyweight stuff, because I was bored with VB or too lazy for some tedious task. The similarities in where we started probably added to my enjoyment of it. Also, I learned a new way to do a frown in text. I'm sure some tech project or new JS framework on HN will give me a use for it in the near future. ;)

I don't think you can be a developer in 8 - 12 weeks, as mentioned in this article. Software Development is a skill as much as anything else, and there is no time frame. All you can use to assure yourself is if you have practice, and the confidence in yourself by that practice. For some people that confidence comes after a year, maybe even two years. But then again, that confidence can even come in 2 months.

That's a great transition, congrats to the author. My wife's mother also went from a secretary to the COO of a multi-billion dollar real estate company. Similarly, the current CEO of Xerox, Ursula Burns, was an executive assistant at Xerox. Although rare, it seems like things like that happened a lot more often before than now. I'm not sure if it means that as a society we have more opportunity or that we are more pigeonholed in our careers. Maybe it's the free-agent nature of our employment these days, but I don't picture executive assistants these days ever getting the opportunity of jumping into something completely different and rising to the C-level ranks.

> There are also a lot of developer bootcamps: within 8-12 weeks you can become a developer. I think this is great if you want to become a developer within a small agency or working in house. Those fast tracks mainly teach you how to code, but not other important stuff like software engineering, algorithms and data structures, patterns, databases, theoretical stuff about computers and so on, which you would need in bigger projects. Bigger companies mostly want you to have a formal education. The same is true when you want to climb up the corporate ladder. Universities don't really teach you how to code, but they teach you timeless things! I never regretted my hard way, because I learned so many different things.

While it's true that a lot of organizations still put a lot of value on a traditional education, the idea that going to a bootcamp only qualifies you for work "within a small agency or working in house" just seems condescending. I work for a Fortune 500 company and we hire bootcamp grads all the time; many of them have gone from apprentice to junior to mid-level engineer in just a couple of years, and they're fucking fantastic.

I strongly feel that getting relevant, applicable skills is essential to starting a career in software engineering, and that more theoretical skills can then be acquired along the way. I've seen it happen several times.

Another Delphi person!! Yay! I spent so much of my life writing Delphi code. As a teenager. Basically from 12-18 I wrote Delphi/Pascal. Hundreds of stupid little Windows apps (and some stupid big ones).

I ran into the same sort of thing you did re: Delphi jobs. I was pretty sad, honestly, having started in the original Delphi days (version 1!) and going to 6 I had gotten pretty good at it...

Well done. Interesting that you couldn't get a job coding in Delphi after you won the competition. I remember back around the same time, here in Australia there used to be quite a few Delphi related jobs around. Perhaps it was different in Europe.

Did you consider writing a stand alone app in Delphi that you could package and sell?

My mother worked with someone well above this woman's age who went from basically the department admin (admittedly, in IT with a programming group) to what was apparently a pretty solid Lotus Notes admin, though she did a bit of job hopping in the process before ending back at the same company where she'd started.

Many administrative jobs probably offer a variety of paths that could lead to this. The person working can be someone who does the job as presented to them, or they can be the person who finds out what's needed and figures out the way to do it, learning along the way. An awful lot of programs are written because someone with the skills wants to automate something they find boring.

I really don't think the 8-12 week code camp means you are a truly competent developer. Sure you can hack around on x or y js framework of the month. But data structures, algos, big O...all of that comes into play at some point as a software engineer. You don't use it every day, or even every week. But it does happen.
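To make the point concrete, here's a minimal sketch (my own hypothetical example, not from the article) of where big-O knowledge bites in everyday code: two deduplication functions that return identical results, but one does an O(n) membership test per item (quadratic overall) while the other uses a set for average O(1) lookups.

```python
import timeit

def dedupe_with_list(items):
    """Quadratic: `x not in seen` scans the whole list each time."""
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def dedupe_with_set(items):
    """Roughly linear: set membership is O(1) on average."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

if __name__ == "__main__":
    data = list(range(5000)) * 2
    assert dedupe_with_list(data) == dedupe_with_set(data)
    # Same answer, very different scaling as the input grows.
    print("list:", timeit.timeit(lambda: dedupe_with_list(data), number=1))
    print("set: ", timeit.timeit(lambda: dedupe_with_set(data), number=1))
```

Nothing a framework-of-the-month tutorial covers, but it's the kind of thing that quietly turns a fast feature into a slow one at production data sizes.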

And tbh, you can really tell the quality of candidate of code camp vs 4 year degree. We hired a code camp candidate, just to see how it played out. It didn't work that well.

This isn't a comment about the author, as much as it's about something she said.

"Today you can take a lot of programming and Computer Sciences courses online. Everyone can be developer! There are also a lot of developer bootcamps: within 812 weeks you can become a developer."

This is a very dangerous line of thinking. Some people have convinced themselves that they are competent developers because they went to a bootcamp. And they might have just enough domain knowledge to convince a company with poor hiring practices that they're worth hiring.

I inherited a situation like that (this dev was hired a few weeks before me). After a few weeks it was painfully obvious that this guy was a detriment to the company because of his lack of coding ability. For reasons above my paygrade, we couldn't fire him immediately, and eventually we took all responsibilities away from him. We paid someone to come in and not do work for us.

I've interviewed dozens of developers since then. The ones coming from a bootcamp (or a similar situation) have no computer science skills. They also have no problem-solving skills; they're unable to break out of the box they were taught in. Most companies can't afford to hire a developer who knows one thing, and one thing only.

Now, we've had 4 year university graduates with experience in the field come in from top schools with degrees in CS. A (scarily) large percentage of them are incompetent as well, though not to the degree of the bootcampers. They're typically serviceable though.

Ok. So I started reading this and just couldn't. Too many side comments, and I just couldn't follow the article's flow. Kudos to this person for putting in the effort to do what they want. I just can't get over the writing style.