It allows cherry-picking icons from dozens of different SVG icon packs (including Feather) and packaging them into a custom (web)font. It can also be used to package your own SVG assets into a single font file and then use the resulting .woff as an SVG sprite sheet.

All these icon collections are great, but I always find something missing, and I don't know how this can be fixed besides complaining (and I don't want to complain when they give it away for free).

For example, this one looks really nice, but it only has 'battery' and 'battery-charging'. In an app I work on, I need battery with no charge, 25%, 50%, 75%, 100%, and charging.

Then I think that this would be a small thing to add, but again I don't want to complain. Then maybe someone will need a battery with a sliding charge, so they can represent as many charge percentages as the pixels allow. And then this becomes a whole other thing, and not just an icon collection.

Cool site! Always a fan of getting more design options out there for icons / UX in applications.

Bit of a tangent for product folks:

A lot of this stuff actually does look beautiful, but the use of "beautiful" as a modifier has diluted the term for me. When I click on something labeled "beautiful" I almost always expect to see "meh", and most of the time, that's exactly what happens.

I understand this is not inherently presented here as any particular solution or competitor, although its presence here elevates it as such. So why use this, or do this, instead of using the library at thenounproject.com?

Took the course. A lot of it is cruft and motivation for the underlying core ideas. The techniques suggested are things many people are already familiar with: recall, deliberate practice, interleaving, spaced repetition, Einstellung, Pomodoro, Feynman Method, Cornell notes or similar (to force recall), exercise regularly, sleep well, focus on concepts not facts (chunking), etc. A composite of these dramatically enhances the learning process.

I can post some of the notes I took on the course if anyone is genuinely curious. The key premise of the course is that the brute force approach people usually take to learning is highly inefficient and ultimately ineffective (you'll forget).

I think I recall that not too long ago, the most popular course on Coursera was Ng's ML course. It is ironic that people are now more interested in teaching themselves how to learn than in teaching a machine. This change could be attributed to other reasons, like a change in user demographics or market saturation, so that the most popular courses naturally change once a large majority moves from one to the next. But I want to believe there is a more interesting phenomenon occurring, where reading about abstract notions of learning causes a person to question how they themselves learn, and whether the same abstract concepts apply. This is more a whimsical thought than a serious one.

The second reason this is interesting is that it could be surfacing a real issue with the way we have become accustomed to ingesting data. Could it be that we are becoming aware, and fearful, that the long-term effects of suckling the internet's spout of instant gratification are causing serious harm to our ability to "actually learn"?

Neither may be the case, but it seems like there is something interesting going on here.

Dr Oakley also wrote "A Mind for Numbers", which is essentially this course in text form. The book is great as a basis for the theory of learning, and dives into the same content (diffuse vs focused thinking, skimming chapters before reading, etc.).

I find having a text reference with dedicated time makes me learn more, so if you're interested in the course you'd probably also love the book.

This course revolutionized my views on learning. After taking it and applying the suggested techniques I've seen an amazing increase not just in my competence but my confidence. It left me feeling empowered. I was almost a bit sad when I reached the end.

I'll report in too. I took the course and thought it was excellent. I love learning and have been learning new things for decades and thought my techniques were pretty good. I'm a very fast learner. The course helped me more than I was expecting and my learning speed and ability to memorize noticeably improved. Especially with the foreign language I'm studying. And the theories around how the brain works were interesting. And it's a pretty short course.

Took the course, loved it. Bought the book, loved it. Encouraged my partner to check it out, she stuck through it. 3 years later she's about to graduate from college with her basic counselling education and experience behind her where she hit top of the class. She's about to set out on her own. This course was a massive driver and I'm not sure she'd have gone this way this quickly without it.

Can any of you report long-term benefits from these kinds of courses? Personally, I think those classes (I haven't looked into the Coursera one) only present obvious stuff.

I once worked through "Make It Stick", a book that is often recommended when it comes to learning. What I found is that there is nothing wrong with the content, but it did not really help.

I imagine that most people who struggle with learning deal with some kind of psychological issues that need to get addressed. They need to learn how to deal with stuff like frustration, worries, perfectionism or self esteem.

Fantastic that resources like this now exist. In some ways it seems to be reminding us about how we used to learn. Children spontaneously go back again and again to things that delight them (spaced repetition) and they switch activities when bored (Pomodoro). Unfortunately, perhaps as a result of schooling, or other hard knocks, the spontaneous impulse gets lost. Adults suffer from mixed motivations and seem to be fairly clueless about what they find genuinely interesting. It becomes difficult to approach topics playfully.

There is a book from the 80s with the same name, "Learning How to Learn" by Gowin & Novak. The book was very influential in the UX field. Concept Maps - the technique presented in the book - are used a lot to understand user mental models. The book is 80% discussion of how to apply the technique in a classroom and 20% explaining the technique, but anyway it's worth the read.

Edit: small correction, according to google the book was published in 1984

Sounds to me like a lot of people are searching for a course which will allow them to overcome a lack of intrinsic motivation. But all the best tools in the world won't make you a smith if you find no fun in hammering red-hot iron.

"Reflections on Trusting Trust" by Ken Thompson is one of my favorites.

Most papers by Jon Bentley (e.g. A Sample of Brilliance) are also great reads.

I'm a frequent contributor to Fermat's Library, which posts an annotated paper (CS, Math and Physics mainly) every week. If you are looking for interesting papers to read, I would strongly recommend checking it out - http://fermatslibrary.com/

I would never call it my "all-time favorite" (no paper qualifies for that title in my book), but Satoshi Nakamoto's paper, "Bitcoin: A Peer-to-Peer Electronic Cash System" deserves a mention here, because it proposed the first-known solution to the double-spending problem in a masterless peer-to-peer network, with Byzantine fault tolerance (i.e., in a manner resistant to fraudulent nodes attempting to game the rules), via a clever application of proof-of-work:

Others in this thread have already mentioned papers or opinionated essays that quickly came to mind, including "Reflections on Trusting Trust" by Ken Thompson, "A Mathematical Theory of Communication" by Claude Shannon (incredibly well-written and easy-to-follow given the subject matter), and "Recursive Functions of Symbolic Expressions and Their Computation by Machine" by John McCarthy.

I would also mention "On Computable Numbers, with an Application to the Entscheidungsproblem" by Alan Turing, "On Formally Undecidable Propositions of Principia Mathematica and Related Systems" by Kurt Gödel, and "The Complexity of Theorem Proving Procedures" by Stephen Cook, but in my view these papers are 'unnecessarily' challenging or time-consuming to read, to the point that I think it's better to read textbooks (or popular works like "Gödel, Escher, Bach" by Douglas Hofstadter) covering the same topics instead of the original papers. Still, these papers are foundational.

Finally, I think "The Mythical Man-Month" by Fred Brooks, and "Worse is Better" by Richard Gabriel merit inclusion here, given their influence.

This is by no means an exhaustive list. Many -- many -- other worthy papers will surely come to mind over the course of the day that I won't have a chance to mention here.

There are many other good recommendations elsewhere in this thread, including papers/essays I have not yet read :-)

The first half of the paper is a spot-on critique of so many things that go wrong in the process of designing and implementing large-scale software systems. The second half, where the authors propose a solution, kind of goes off the rails a bit into impracticality... but they definitely point in a promising direction, even if nobody ever uses their concrete suggestions.

programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.

I've been trying to get it frontpaged because, despite its length, it's perhaps one of the most startling papers of this decade. Sadly, it seems like the HN voting gestalt hasn't decided to upvote a paper that's the CS equivalent of breaking the speed of light:

It is possible, with some proper insight and approaches, to sort general data structures in linear time on modern computing hardware. The speed limit of sorting is O(n) with some extra constant cost (often accrued by allocation). It works by decomposing and generalizing something akin to radix sort, leveraging a composable pass of linear discriminators to do the work.
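To make the radix-sort connection concrete, here is a minimal sketch of the classic special case: LSD radix sort on fixed-width non-negative integers, which runs in O(n) for a fixed key width. The paper's discriminator framework generalizes this byte-at-a-time pass to arbitrary datatypes; this snippet (names and the 4-byte width are my own choices, not from the paper) only shows why the O(n log n) comparison bound doesn't apply.

```python
def radix_sort(xs, key_bytes=4):
    """LSD radix sort on non-negative integers below 2**(8*key_bytes).

    Each of the key_bytes passes distributes all n elements into 256
    buckets by one byte of the key, then concatenates the buckets in
    order. Because each pass is stable, after the last (most
    significant) pass the list is fully sorted: O(key_bytes * n) time.
    """
    for shift in range(0, key_bytes * 8, 8):
        buckets = [[] for _ in range(256)]
        for x in xs:
            buckets[(x >> shift) & 0xFF].append(x)
        xs = [x for bucket in buckets for x in bucket]
    return xs
```

The extra constant cost the comment mentions shows up here as the per-pass bucket allocation; the discrimination papers spend a lot of effort making that composable for structured keys rather than flat integers.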

There's a follow-up paper using this to make a very efficient in-memory database that one could easily generalize under something like Kademlia, and with care I suspect could make something like a better Spark core.

I keep submitting and talking about this but no one seems to pick up on it. This paper is crazy important and every runtime environment SHOULD be scrambling to get this entire approach well-integrated into their stdlib.

"Following the popularity of MapReduce, a whole ecosystem of Apache Incubator Projects has emerged that all solve the same problem. Famous examples include Apache Hadoop, Apache Spark, Apache Pikachu, Apache Pig, German Spark and Apache Hive [1]. However, these have proven to be unusable because they require the user to write code in Java. Another solution to distributed programming has been proposed by Microsoft with their innovative Excel system. In large companies, distributed execution can be achieved using Microsoft Excel by having hundreds of people all sitting on their own machine working with Excel spreadsheets. These hundreds of people combined can easily do the work of a single database server."

PS: This thread is great; I'm bookmarking it because there are good (serious) papers here.

I know Thompson's "Reflections on Trust" and Shannon's "Communication" papers are more famous but I believe BCS's "Correctness" paper has more immediate relevance to a wider population of programmers.

For example, I don't believe Ethereum's creator, Vitalik Buterin, is familiar with it because if he was, he would have realized that "code is law" is not possible and therefore he would have predicted the DAO hack and subsequent fork/reversal to undo the code.

Seriously, if you read BCS's paper and generalize its lessons, you will see the DAO hack and its reversal as inevitable.

Diego Ongaro's Raft paper[1]. Perhaps this only speaks to my experience as a student but having surveyed some of the other papers in the domain (paxos[2] in its many variants: generalized paxos[3], fpaxos[4], epaxos[5], qleases[6]), I'm glad the author expended the effort he did in making Raft as understandable (relatively) as it is.

A bit cliche for HN, but I really enjoyed RECURSIVE FUNCTIONS OF SYMBOLIC EXPRESSIONS AND THEIR COMPUTATION BY MACHINE (Part I) by John McCarthy[0]. It was accessible to someone whose background at the time was not CS and convinced me of the beauty of CS -- and lisp.

It might be a cliche one to pick, but I really really really enjoy Alan Turing's "Computing Machinery and Intelligence"[1]. This paper straddles the line between CS and philosophy, but I think it's an important read for anyone in either field. And a bonus is that it's very well-written and readable.

Yao's minimax principle. It's not a very exciting read or a very exciting conclusion compared to some of these other papers, but it's still interesting, and the conclusion has been practically useful to me a small handful of times.

It concerns randomized algorithms, which are algorithms that try to overcome worst case performance by randomizing their behavior, so that a malicious user can't know which input will be the worst case input this time.

The principle states that the expected cost of a randomized algorithm on a single input is no better or worse than the cost of a deterministic algorithm with random input.
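Stated a bit more formally (my notation, not the parent's): writing $c(A, x)$ for the cost of deterministic algorithm $A$ on input $x$, Yao's principle says that for any input distribution $p$ and any randomized algorithm $R$,

```latex
\min_{A} \; \mathbb{E}_{x \sim p}\bigl[c(A, x)\bigr]
\;\le\;
\max_{x} \; \mathbb{E}\bigl[c(R, x)\bigr]
```

so the expected cost of the best deterministic algorithm against a chosen input distribution lower-bounds the worst-case expected cost of any randomized algorithm. This is the direction that makes it practically useful: to prove a lower bound for all randomized algorithms, you only need to exhibit one hard input distribution for deterministic ones.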

Yao proves this is the case by constructing two zero sum games based around the algorithms' running times and then using game theory (specifically von Neumann's minimax theorem) to show that the two approaches are equivalent. It's a really neat approach!

There are a ton of fantastic Haskell papers, but if I had to pick one this would be it. It reconciles the pure and lazy functional nature of Haskell with the strict and often messy demands of the real world:

This paper, written during WWII (!) by someone who had around 20 years of computing experience at that time (!!), introduced the world to ideas like hypertext and citation indexes. Google's PageRank algorithm can be seen as a recombination of ideas from this paper.

The Anatomy of a Large-Scale Hypertextual Web Search Engine, by Brin and Page.

Not only for the historical value of changing the world, and for the fact that it's very interesting and readable; It has personal value to me: the first CS paper I've ever read and it inspired me and changed the course of my life, literally.

Also, it has some very amusingly naive (in hindsight) stuff in it, like: "Google does not have any optimizations such as query caching, subindices on common terms, and other common optimizations. We intend to speed up Google considerably through distribution and hardware, software, and algorithmic improvements. Our target is to be able to handle several hundred queries per second"

I haven't read a ton of academic research in general, but in trying to understand CRDTs and concurrency, gritzko's paper on "Causal Trees"[1] struck me as incredibly smart and clear in its thinking. Many of the other CRDT papers I read (even influential ones) were flawed in a number of respects: blurred lines between design and implementation, blatant mistakes and typos, hasty and unconvincing conclusions, an overabundance of newly-minted terms and acronyms, dense proofs lacking any concrete examples, unintuitive divisions between operation history and state mutation. The Causal Trees paper is also dense and also invents a bunch of new vocabulary, but the logic is completely consistent (to the point of being unified under a single metaphor) and clearly explained every step of the way. The data format is also very clever, and the paper spends a good amount of time following the practical consequences of those design decisions, e.g. the ease of displaying inline changes, or of generating a particular revision of the document.

The Flajolet-Martin paper on counting unique items in an infinite stream with constant space [1]: a great, well-written introduction to streaming algorithms that triggered my first "aha" moment in the field. You never forget your first.
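For readers who want the "aha" without the paper first, here is a simplified single-hash sketch of the idea (the original uses a bitmap of observed bit positions and averages many hash functions; the constant 0.77351 is the correction factor from the paper, while the function names and the md5-based hash are my own illustrative choices):

```python
import hashlib

def rho(h):
    """Position of the least-significant set bit of h (1-indexed)."""
    return (h & -h).bit_length()

def fm_estimate(stream):
    """Estimate the number of distinct items in a stream in O(1) space.

    Each distinct item hashes to a fixed value, so duplicates cannot
    move the maximum. About half of all hashes have rho = 1, a quarter
    have rho = 2, and so on, so seeing a maximum rho of R suggests
    roughly 2**R distinct items.
    """
    max_rho = 0
    for item in stream:
        h = int.from_bytes(hashlib.md5(str(item).encode()).digest()[:4], "big")
        max_rho = max(max_rho, rho(h))
    return int(2 ** max_rho / 0.77351)
```

A single hash function gives a very noisy estimate (the variance of a maximum is large), which is why the paper and its successors combine many independent estimators.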

Mine is "Image Quilting for Texture Synthesis and Transfer" by Efros and Freeman. It's simple enough to implement as a personal project and has some nice visual output. Plus, Wang tiles are cool and it's fun to learn more about them.

... a completely lock-free operating system optimized using run-time code generation, written from scratch in assembly running on a homemade two-CPU SMP with a two-word compare-and-swap instruction - you know, nothing fancy.

Which (necessarily) undersells by a very large margin just how impressive, innovative, and interesting this thesis is.

If you're interested in operating systems, or compilers, or concurrency, or data structures, or real-time programming, or benchmarking, or optimization, you should read this thesis. Twenty-five years after it was published, it still provides a wealth of general inspiration and specific food for thought. It's also clearly and elegantly written. And, as a final bonus, it's a snapshot from an era in which Sony made workstations and shipped its own, proprietary, version of Unix. Good times.

"Copycat is a model of analogy making and human cognition based on the concept of the parallel terraced scan, developed in 1988 by Douglas Hofstadter, Melanie Mitchell, and others at the Center for Research on Concepts and Cognition, Indiana University Bloomington. Copycat produces answers to such problems as "abc is to abd as ijk is to what?" (abc:abd :: ijk:?). Hofstadter and Mitchell consider analogy making as the core of high-level cognition, or high-level perception, as Hofstadter calls it, basic to recognition and categorization. High-level perception emerges from the spreading activity of many independent processes, called codelets, running in parallel, competing or cooperating. They create and destroy temporary perceptual constructs, probabilistically trying out variations to eventually produce an answer. The codelets rely on an associative network, slipnet, built on pre-programmed concepts and their associations (a long-term memory). The changing activation levels of the concepts make a conceptual overlap with neighboring concepts." -- https://en.wikipedia.org/wiki/Copycat_(software)

All of the classic papers I can think of have already been mentioned, but even though it's too recent to pass judgment a new contender may well be "Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design" - https://arxiv.org/abs/1704.01552

I would say "Agent-Oriented Programming" by Yoav Shoham. It certainly set my mind going and made me think about how programs could be organized. I still think agents, systems of agents, and mobile agent code have a place in computing. Even though some form of RPC over HTTP won over mobile code, I look at the spinning up of VMs and cannot help but think that agents have a place. Combined with the tuple space stuff from Yale, I still see a powerful way forward.

Trading Group Theory for Randomness by Laci Babai (http://dl.acm.org/citation.cfm?id=22192) -- this beautiful paper introduced algorithmic group theory & interactive proofs (in the form of Arthur-Merlin games) to study the Graph Isomorphism problem, and introduced several groundbreaking new results. Perhaps a more approachable (and funny) version of this would be Babai's humorous essay detailing the flurry of work that broke out after his results introducing AM/MA...it's the closest thing I've seen to making theoretical CS exhilarating :P (http://www.cs.princeton.edu/courses/archive/spr09/cos522/Bab...)

My favorite paper in computer systems is "Memory Resource Management in VMware ESX Server". It identifies a problem and devises several clever solutions to it. I love papers that make you go "AHA!".

It's not world-changing or even particularly novel, but it's such a simple concept explained very well that really changes how you see the typed/dynamic language divide, as well as language design in general.

This shows how you end up "differentiating" datatypes in the context of strict functional programming, in order to do things like "mutate" lists. It is essentially the same as what mathematicians call "combinatorial species".
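As a rough illustration of the idea (this is the familiar "zipper" that the derivative construction produces for lists; the function names are mine, and Python lists stand in for the linked lists that would make each step O(1)): the derivative of the list type is a pair of contexts around a focused hole, and "mutating" at the focus builds a new structure without touching the old one.

```python
# A list zipper: (reversed elements left of the focus, focus + rest).
# With linked lists each operation would be O(1); Python list slicing
# makes these O(n), but the shape of the construction is the same.

def from_list(xs):
    return ([], list(xs))                  # focus on the first element

def right(z):
    left, rest = z
    return ([rest[0]] + left, rest[1:])    # shift focus one step right

def set_focus(z, v):
    left, rest = z
    return (left, [v] + rest[1:])          # "mutate" the focused element

def to_list(z):
    left, rest = z
    return list(reversed(left)) + rest
```

For example, focusing one step into [1, 2, 3] and replacing the focused element with 9 yields [1, 9, 3], all with pure functions.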

Not a paper, and not strictly CS, but Mythical Man-Month by Brooks. It solidified the connection in my mind between systems engineering and software engineering. Other readings since then have extended and changed this understanding, but this is where my approach to software development started to mature.

As an architectural lighting guy, seeing realtime global illumination look this good in a game engine was fantastic. Parts of the algorithm I can understand, parts go over my head still, but the results are amazing.

A big part of what I do at work is radiosity simulations in AGI32 which is of course more accurate (because it's trying to accurately simulate real world lighting results) but much much slower.

This paper develops a precise model for internal iteration of a data structure, such that exactly the necessary information is exposed and no more.

It's a fantastic exploration of improving a well-known design space with justified removal of details. I keep its lessons in mind whenever I am facing code that seems to have a lot of incidental complexity.

Royce 1970, of course: http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/wate... wherein he did not introduce Waterfall, but for some reason the negative aspects of his article became the basis for Waterfall. The article from 1970 is surprisingly relevant, although archaic in language. It's worth reading to the end. He wrote it describing leading teams in the 1960s doing what I assume was actual "rocket" science.

Olin Shivers's work on various control flow analyses, in particular the paper "CFA2: a context-free approach to control-flow analysis", is a really cool static analysis via abstract interpretation. Matt Might had a bunch of papers in a similar vein.

Growing a Language by Guy Steele (co-inventor of Scheme). A brilliant speech about how to grow languages and why it's necessary. Languages that can be grown by the programmer, like Lisp or Smalltalk, are better than languages that are fixed, like most others; this is why.

The simplest and most effective hash table scheme, and nobody is using it, or even knows about it. Fastest and least memory, but not thread-safe. After 12 years there's still nothing better on the horizon.

My favorite topic was from an advanced user interfaces class. Describe 3 examples of a bad user experience where the input into the system does not give you the expected output. My poor example was a Kleenex box: I try to pull out one Kleenex and it tears, or two come out at a time.

Unrelated to the article, but WOW, did anyone else see the giant full-view-height video ad at the top? I have my ad blocker off for NatGeo. It's followed by a giant banner. I can't imagine how much crazier ads have been getting, and most of us don't even notice because we have ad blockers.

It is much drier than Sahara sand, where soil moisture can reach 1%. Dune's scenario would be a piece of cake compared to what you would need to do to feed a colony off of this water. I suspect that energy-wise it would be cheaper to import water from Earth and recycle like crazy.

"In 2009, NASA crashed a rocket and a satellite into a crater on the moon's south pole, in the hopes of picking up additional watery evidence."

No. In 2008, ISRO crashed the Moon Impact Probe (MIP) into the lunar surface as part of the Chandrayaan-I mission. The MIP didn't have any specific role other than a political agenda. NASA's Moon Mineralogy Mapper (M3), onboard the Chandrayaan spacecraft, was instrumental in providing the first mineralogical map of the lunar surface.

NASA missions like Lunar Prospector and the Lunar Crater Observation and Sensing Satellite, and instruments like M3, have gathered crucial data that fundamentally changed our understanding of whether water exists on the surface of the moon.

National Geographic titling the article 'Get the facts' while blatantly ignoring ISRO's contribution is not healthy.

The reason I brought this up is not to score nationalist brownie points, but to give credit where it's due. In a country like India, where an organisation like ISRO instills scientific temperament in a large population, ignoring its contribution by global media focused on science is an insult to the entire scientific community.

A lot of conversations about expanding rights end up with a distinction between positive and negative rights. A right to water is different from free speech. For everyone to have a right to water, someone needs to provide the water or be held accountable for not providing it. A default culprit, the state, needs to be designated.

Privacy, though - privacy is like speech, or equality before the law, or presumption of innocence. You have it by default, and if it's denied there is a culprit.

Even in this case, we seem to have a hard time expanding rights. I say expanding, but privacy is already formally a right in many places. The interpretation of that right, however, is very weak.

Anyway, if we're to make privacy a right with serious intent, then there needs to be a willingness to break eggs, to bear a cost. The rights to free speech, conscience, affiliation, assembly and other political freedoms mean we need to tolerate and protect the proverbial Nazi's right to try and spread their politics. A bitter pill for anyone scared of a proverbial Nazi takeover.

Are we willing to bear the (mostly fear-based) costs in the fight against the terrorism demon? The economic costs that will be claimed by companies relying on data? If we have a strong yes, I think we can start building the real framework of laws and conventions that will secure a right to privacy for the next few generations.

A word of caution before being carried away by the words "fundamental right". According to the judgement, the right to privacy is fundamental (as an offshoot of the fundamental rights of freedom of life and personal liberty guaranteed under Article 21 of the Indian Constitution).

Asserting it as a "fundamental right" raises the bar on what restrictions can be put. But reasonable restrictions can still be put.

A separate bench of the SC is now going to test the validity of the Aadhaar Act on the basis of whether the restrictions are reasonable in the light of privacy being a fundamental right.

Edit: The Aadhaar Act is an act that allows for the government to collect biometric and other personal data that can be used to identify an individual for various services (including but not limited to governmental benefits).

As an aside, this is a decent example of why some people oppose bills of rights. As I understand it, the argument is that a bill of rights is considered exhaustive; if it's not in the bill, it's not protected.

By contrast, countries without a bill of rights are free to interpret their constitutions through implied rights, in ways that make sense in the context and allow the constitution to adapt to new or developing circumstances.

"The petitioners, former Karnataka high court judge Justice K.S. Puttaswamy and others, had contended that the biometric data and iris scans being collected for issuing Aadhaar cards violated the citizens' fundamental right to privacy"

I find the above interpretation of privacy troubling. In order for the government to effectively distribute social services to the needy, cutting out corrupt or inefficient middlemen, it needs a way of reliably verifying someone's identity, so that people can't enroll themselves twice. Having people provide biometric identifiers such as iris scans, if they want to qualify for government services, seems like a perfectly reasonable way to do this. I would also contend that when most people declare the importance of privacy, they are talking about their actions and lifestyle, not fingerprints or iris scans. It would be sad if this ruling prevents the government from efficiently providing social services to help the poor.

It seems kind of bizarre to me that as an American English speaker halfway across the world, I'm in a better position to read and comprehend the Indian Supreme Court's rulings than a great number of non-English-speaking Indians.

The Indian government has been mandating all utility providers to link the biometric details of the subscribers to their account. With the rising number of criminal cases due to misuse of laws such as those used in marital disputes, the government can easily control what services to provide or deny its citizens based on centrally available biometric database which it could not have done before. Of course this is just a crazy theory for now which can hopefully never happen due to this much needed judgement.

A lot of comments in this thread are misleading. What the Supreme Court has done is expand the interpretation of an existing Fundamental Right (the 'Right to Life') to include a 'Right to Privacy'. This means that if any law is made that infringes on an individual's privacy, it'll be tested for reasonableness.

So before this judgment, the legislature could have, for example, made a law requiring all internet activity to be reported to the government, or criminalizing homosexuality (an existing law), and anyone challenging the law could not claim that it violated his privacy, as such a right was not recognized.

After this judgment, such an argument could be made and the courts would test whether the violation of one's privacy is a reasonable restriction or not. So a law requiring you to have number plates on your car to be captured by traffic cams, or KYC norms for Bank accounts, reporting of your financial data to tax authorities could be held to be a reasonable restriction whereas laws such as criminalizing one's sexual orientation could be held unreasonable.

What prompted this constitutional reference was the government's 'Aadhar Scheme', which compelled 1.2 billion citizens to hand over their private biometric data to the government if they wanted to claim any government services. This judgment provides the test to be used while deciding whether the law and its applications are constitutional. Most likely the scheme will not be struck down in total, but specific instances will be tested on a case-by-case basis. (E.g. Aadhar can't be made compulsory for getting health services but can be made so for a gun licence, as the latter seems reasonable but the former may not.)

To HN readers outside India who are unaware of the background behind this historic judgement: it all started off with a series of litigations against the govt's mandate to link India's unique identity system, Aadhar (which includes biometric data), to existing Indian identification systems for different purposes.

In HN fashion, if you are interested in how the govt pulled off the huge technical overhead of storing a billion records, see this talk by its chief architect - https://www.youtube.com/watch?v=08sq0y8V1sE

> He said that in developing countries, something as amorphous as privacy could not be a fundamental right, that other fundamental rights such as food, clothing, shelter etc. override the right to privacy.

It's a question which has always bothered me: what happens when two fundamental rights clash with each other?

I'll wait for the full text of the judgements before celebrating. The ruling apparently consists of 6 judgements and should be available shortly. Hope this reins in the Aadhaar monster without leaving any wiggle room for the government to exploit.

Can someone more well-versed in Indian law help me understand : Does this ruling prevent collection of biometrics or restrict it somehow under the Aadhar system?

Also, I recall the Indian Government was pushing Aadhar Pay - a biometric fingerprint scan based PoS payment / verification system (likely it is already deployed, I don't live there so I don't know). What happens to that now?

Privacy is about the freedom of thought, conscience and individual autonomy, and none of the fundamental rights can be exercised without assuming a certain sense of privacy, argued Mr. Subramaniam.

It would be interesting to see if there are more specifics w.r.t. data security in the judgements. I smell a criminal lawyer cooking up ways this judgement would help his clients refuse to open their computers and smartphones to the police!

The judgement will be widely shared worldwide. For people outside India who aren't familiar with Indian Supreme Court judgements: keep Merriam-Webster nearby.

After 70 years of independence, the right to privacy is recognised as a fundamental right by the Supreme Court of India. Individual rights, especially privacy, underlie the cornerstone of democracy: liberty. A small step for the Indian Supreme Court, a giant leap for 1/5th of mankind!

On the one hand Aadhar is so convenient. If I want a phone number or a bank account, I can simply identify myself with my thumbprints and iris scans and get it activated immediately without paperwork. This has really made things easier for people. Using biometrics also reduces fraud when claiming benefits from the government and maybe makes the process easier as there is again no paperwork, and it is easy to make claims.

But on the other hand, all this makes it so easy to track anyone. All your bank accounts, cards, phone numbers and internet connections would be linked to your Aadhar number and would be centrally accessible. This is a privacy nightmare. I am already getting frequent messages from my phone company to link my phone number with my Aadhar number or let it be deactivated.

All this information would be in the hands of government officials. The Indian bureaucracy is notorious for corruption at every level. What if you could purchase somebody's data through an "agent" and get access to everything they do, everything they buy, everyone they transact with, everyone they communicate with, the contents of every message they send to anyone at all? Imagine the kind of possibilities this opens up for negative minds.

Besides this, someone could just hack the data and maybe leak all of it. Someone recently created an app that would let you get anyone's details, including their phone number, address, etc., by typing in their Aadhar number. It was taken down a month ago. I'm not sure about the exploit, but it was related to using plain http instead of https somewhere. I checked one of the Aadhar-linked projects and found that they were using an open source library in the backend which wasn't up to date, and the version being used had some documented security vulnerabilities. I wonder how safe people's data really is.

A large number of Aadhar numbers have already been leaked thanks to government websites. It is possible to extract a person's fingerprint or iris scan using photos of their hands or face in specific conditions. If the person has linked their bank account with Aadhar, which is becoming compulsory, one could take money out of their accounts by impersonating their prints or iris scans. Fortunately there is an option to protect yourself from this: go to the Aadhar website and lock your biometrics data. If used regularly this can protect people from "biometrics theft", but the biometrics are unlocked by default, and for 99% of people they are going to stay that way.

An amazing talk from a couple of weeks ago on the details of Aadhaar, how it was "implemented", and why this court ruling is super important both in India and closer to the West: https://www.youtube.com/watch?v=iCkhupMROZU

I'm not sure if I read it here, but there was a great example given of how to answer people who take the approach of "if you are not doing anything wrong, you have nothing to hide".

Using this logic, I recently somewhat won an argument with my fiancée. She has always believed that I'm hiding something on my phone because she doesn't have the PIN to it, and because I'm unwilling to give it to her, she assumes the worst.

So I made a bet. I asked her what she does in the bathroom. She answered that she does what everyone else does: #1, #2 or showering. I replied that she must be doing something wrong or maybe illegal, since she not only closes the door but locks it as well! We had a short argument back and forth about how, obviously, it is not about hiding something but rather about enjoying your own time in privacy, and I think she kind of got it.

The bet has been in place for 3 months now: if she leaves the door wide open while in the bathroom, I will give her the code to my cellphone. So far I haven't had to give it out just yet :)

They also made sexual orientation a right as well. Read further into this ruling; it is about more than just privacy at the top level. They went on to make sure people/prosecutors understand what they really mean.

I'm not sure what this has to do with Aadhaar, although the initial petition mentions it I think.

Aadhaar is set up as proof of identity, not proof of citizenship. I for one did not get an Aadhaar until a week ago! The only reason I had to was that a company I applied to asked me to use my full name as mentioned on my Aadhaar card.

India is a bureaucratic mess. And as UG Krishnamurti put it very succinctly, India is a failed country. As mentioned elsewhere in this thread, we desperately need to focus on poverty first.

Privacy: you have a right to try to keep things as private as you want. You should not be prosecuted for merely trying to keep things private.

Your responsibility :

1. Don't share things that you want to keep private.

2. Carefully weigh the trade offs when you agree to share things about you. There is no retroactive privacy on things that you yourself shared.

3. You can attempt to retract what was shared about you, but you can't hold society responsible for successful retraction of that piece of information, from media or minds. You can add addendum e.g. an apology from someone, you can claim damages, but we can't rewind time.

Government responsibility:

1. Don't criminalize people trying to keep things private. This would be similar to the USA's Fifth Amendment: do not force people to share what they don't want to share. Government can ask "What crimes have you committed in the privacy of your home?", but it can't force people to answer that question or punish them for not answering it.

2. You can't plead the Fifth and refuse to prove your identity when you want to take food stamps from the government, or when you claim a tax credit. Just like in any transaction, government can ask you to prove who you are and may demand increasing levels of proof depending on the transaction. Your choice would be not to participate in such transactions; in certain situations you implicitly give the government permission to demand proof of identity from you, e.g. if you request a loan to dig a well, a subsidy to buy fertilizer, or unemployment benefits. Security of the exchange of money from government to people is the government's responsibility, and it may demand increasing levels of identification depending on the nature of the transaction, as warranted by observed or potential abuse. In places with high corruption rates, strong identification would be required and would be appropriate. I don't think people would be OK with someone collecting their pension using just a name, address and birth date, and the government throwing its hands in the air and blaming them for not protecting their name, address and birth date.

What you can't do:

1. Make the world forget what it already knows. Can't ask Google to delete a piece of information about you from entire internet, once you yourself post it on Blogger. You can delete the post from Google, you can delete your account, but you must realize that once something is not private, you have no control over who has seen it and how many formats/copies of that information got created.

2. Get into a contract to drop certain privacy and then deny fulfilling the contract because of privacy rights. E.g. a model can't say that she won't show her face on a fashion ramp because of privacy after taking payment. A storybook author can't say that she won't share her book with publisher because of privacy after taking payment.

3. Make a demand that a private entity, on its private premises, can't have monitoring equipment. A store may decide to have cameras at the self checkout lanes, and it may deny self checkout to folks with full face covering. Your choice would be to not shop at such places, you can't use law to shut down the business's ability to monitor their private premise as they wish. An employer may make alcohol breath analyzer test required e.g. for a surgeon before surgery, a pilot, air traffic control at the start of the duty, or a long distance train driver. The employees in this case can't claim privacy rights to deny such tests.

4. When you are in a public place, e.g. a sidewalk, you are participating in a public endeavor that comes with dropping some of the privacy protection you would get in, say, your bedroom. The rays of light that bounce off of you or your belongings are fair game to be captured. Photographers do not need your permission to capture rays of light travelling in their direction when they stand in a public place, on private property they own, or on private property whose owner has given them permission. Those photographs can only be used for personal consumption or for non-profit activities, e.g. an investigation or news reporting. Any commercial use of the photo, e.g. in a product advertisement, would require a release agreement from the person in the photo.

I think Strong Privacy and Strong Identification both are required, for some things they are mutually exclusive, in some parts you trade one for another. Authentication/Authorization/Encryption/Non-Repudiation is needed to deliver these rights.

Consider this: if privacy laws are absolute in every aspect of life, then you can't have antitrust laws that stop competitors from fixing prices or agreeing to anti-competitive behavior. If privacy laws are absolute, then smartphone apps that capture photo/video of a crime unfolding won't be allowed due to privacy concerns of the criminal. If you can keep something private (lock the door to your room, your safe deposit box), no one will force you to expose it, but one can't demand privacy in situations that naturally expose information to others, unless you explicitly set the expectation of privacy (attorney-client, doctor-patient, a service provider) as part of a contract. Government may make laws to cover the most common situations, e.g. your real estate agent sharing your budget with the seller of the property, your medical records, etc.

Privacy law is natural. What I draw and erase on a doodle board in the privacy of my home is my business; you can't force me to divulge it. What I say in my head to myself is my business; there is no thought crime. What I sing on a trail is my business; no one can force me to say which song I sang. When government or a corporation tries to invade this natural privacy, it should be stopped. In that regard, privacy is a fundamental right. But privacy can't be claimed to hide a criminal record from your neighbors or employers.

Windy was coded by the billionaire founder and owner of Seznam, a Czech search engine (and media company) and one of only a few search engines in the world that still beat Google in their local market.

Interesting note: I play a lot of golf competitively, and tournaments have recently started allowing players to use phones (obviously players don't use them much, if at all; concentration and all that).

But the one specific rule is that players can't use their phones to check the weather, and even more specifically the wind direction. Wind makes a huge difference on the course, and being able to know the exact direction of the wind where the ball is flying would be really helpful. The other part is being able to know if the wind shifts during the round. Before you start you can check the wind direction, but if that changes, you could be out of luck. This seems like a perfect golf aide, so much so that using one is a penalty in a tournament.

This has been my go-to for sailing conditions for a while. I used other sites before it (Predict Wind, Surf Watch), but Windy is fast and responsive and usable on a phone. The data for sites like these all ultimately come from the same sources (the big weather models) so the differentiators in this space are mostly in the user experience.

Other than the whizbang interface there's nothing really innovative going on here as far as actual science... Same with all the other me-too sites that use that same streamline animation code. Some of the visualization is downright misleading, but whatever. The ventusky wave animation is awful and physically incorrect.

As someone who works with mapping data and web based maps regularly, this website is excellent in terms of usability. The ease of switching overlays, adding symbols, saving selection, adjusting the map are all excellent and intuitive. The ability to drill down on symbols added in a smooth and sensible way is excellent. This is how you make web maps for specialist data!

Great website; it has been around for some years as Windyty and Windytv. I guess Windy will be its final name. I usually find windguru.cz easier to read, but Windy offers a cool visualization that I think gives more context. It's really cool to check during hurricanes.

I travelled last minute London -> Chicago -> Portland -> Salem to see my first total eclipse. The Saturday night flight to Portland was packed with excited passengers, all talking about the eclipse. The captain (Spirit Airlines) even announced the flight as the 'Eclipse Express', to cheers from everyone.

I was watching the eclipse in Corvallis and, about 10 minutes before totality, a plane jetted around at high altitude and left a contrail[1]. The new cloud, in the otherwise perfectly clear sky, began to drift toward the sun and I thought, "There could be few better symbols of the attitude that some of the 0.0001% have toward the rest of us than this." Fortunately, even though the contrail did drift over the sun during totality[1], it was very thin (and dark) and did not distract from the event in the least.

I'm glad Alaska Air did this flight over the Pacific and not somewhere it would distract hundreds of thousands of people. Anyone know how to find out what flight (or private jet) was the one I saw? It would be an interesting fact to add to my memory of the event.

[1] During totality (https://njarboe.com/eclipse/totality.jpg). This photo was right at the beginning of totality and in no way captures what I saw, but it does show where the contrail ended up. I was more interested in experiencing the eclipse than trying to get a photo.

Nice, can't wait to run some of our benchmarks against this. Go has the awesome property of always becoming a little bit faster every release. It's like your code gets better without you doing anything. Love it :)

So this new concurrent map? Am I right in understanding it's designed for cases where you have a map shared between goroutines but where each goroutine essentially owns some subset of the keys in the map?

So basically it's designed for cases like 'I have N goroutines and each one owns 1/N keys'?

_A_ device free dinner? Why aren't all of them? These weird little "small gestures" make me feel like people have lost control of their homes.

We have a landline expressly to give out to emergency contacts. And dinner is tech free. Every night.

I think the main problem is tech has made everything _else_ easier, but parenting harder, and parents just aren't prepared to fight the battles / put in the work. A parent who is staring at Instagram at the park shouldn't be surprised that their kids want screens, too.

This doesn't invalidate the article, but I feel the need to point out that the author lives in a $63 million house with $80,000 worth of TV screens lining the walls. "Anyone in the house can change the screens' displays to their favorite painting or photograph, in effect personalizing the room (via lighting, temperature, and even decor) to the guest's own flavor." [0]

They certainly didn't do themselves any favors when they built their house.

I'm a retired geek but I do tractor work, I just pulled a 14 hour day and I'm having some wine so salt heavily.

In my opinion, the best thing you can do with kids is create boredom. If they have access to a shop, or some lumber, they will start to build stuff. If kids are bored and they turn that into building, that's the first step towards getting into a good undergrad school. Building stuff is good.

The hard part, as parents, is creating that boredom. It's so easy to give them the video game babysitter. I haven't done well at that. I wish I had some magic statement that made that part easy but that part is supersuper hard.

The other thing I'd say about kids, and I hate this, really hate this, is private school. It's better. My kids went and were on track to go to Los Gatos High School which is a pretty decent school. For various reasons we found kirby.org and both of my kids go there and it's a shit ton better. I hate private schools, I think kids should experience the full range of people, not just the rich kids that get into private schools, but wow, the private school was so much better. So much better. Hugely better. My younger son who hates that school, it's a nerd school and he's a jock, came to me and said "yeah, I want to go there, it's better than Los Gatos". My older kid is applying to schools and he has a shot at the ivys, that's all the private school.

I'm ashamed to admit that I like private schools, but wow, have they been good for my kids.

"...the interviewer, apparently, asks young people (age 18 to 22 years old) the following: whether men with the (device hidden) are human or not; whether they are losing contact with reality; whether the relation between eyes and ears is changing radically; whether they are psychotic or schizophrenic; whether they are worried about the fate of humanity."

Can you guess the device? It's the Sony Walkman, and the time is the 1980s.

The arguments aren't new or novel. Hosoda was saying similar things in the 1980s, and the Walkman isolated people then just as easily as phones do now. Yet somehow humanity survived.

Remember pagers? They were the device of choice for kids in the 1990s, and they were pilloried just as badly, and even linked to drugs. There's a pretty long history of people panicking morally about new technologies and their dehumanizing effects, and generally people have adapted more or less fine.

The hardest thing for me has been young children begging, just begging, for screen time. It's heartbreaking, and I know very few parents who have managed to make no mean no. I grew up watching TV, but back then TV was mostly for adults and I only paid attention during those few hours it was child-friendly. I spent almost the entire decade of the 90s without a television, only going to the movies or watching the occasional VHS. Even today, while I'm on my computer 8 hours a day, 5 days a week, I read books when I stop work, or cook, or just sit and talk. I turn off the screen, close the laptop and turn off wifi on my phone. I worry about the attention span our kids won't have.

When was the last time you let a child just get bored, so they might entertain themselves with their imagination?

On the other side, when we go out for walks or camping or away from tech, it really doesn't take long for the kids to adjust.

I got Internet access in 1990 when I was 12 years old. Completely unfettered, unmonitored, unlimited in any way. And I wouldn't trade it for anything. Certainly many, if not most, kids would end up obsessing over social status and gossip and similar things. That has nothing to do with technology and absolutely everything to do with how they deal with their life in general. They're encouraged to avoid taking an intellectual approach to life, to never question or doubt their emotional impulses (indeed, they're taught that those impulses are more trustworthy and 'pure' than conclusions reached intellectually). They're the kids that have always been popular, the bullies and the kids that get DUIs before they're out of high school. You can't protect them from themselves through any means if you're not willing to address their way of living.

And for the kids that aren't destined to live an adolescence of bickering and strife, they will flourish with access to the whole of human knowledge and ability to interact with online communities as an equal, without anyone knowing their age unless they choose to reveal it.

Have two kids, 14/12. They've grown up as fully connected kids, have always had access to their own devices, never more than a year or two behind the Big Now. We have never imposed "limits for the sake of limits" on their screen time. I find the idea preposterous, frankly.

We have no issue connecting with them, or doing family things together, or etc etc etc. If you can't connect with your kids when they have iPhone in hand, you're not going to be able to connect with your kids even if you're a million miles from the nearest wireless cloud.

A lot of discussion around this issue is just tired rehashing of the same complaints every generation over the past 150 years has said about the incoming generation. Some of y'all in here already sound like grandparents, lol.

The story actually sounds really fun to play, and fits the Half-Life universe perfectly. Funny that this script has reached the top of Reddit, HN and Twitter within hours of posting, and even crashed the author's site. Even to a blind man it's obvious the demand for the game is there, so it's amazing that Valve continues to ignore it and the fans, but I guess without a cash flow problem they really don't see the point in spending time developing it. A shame.

"Old friends have been silenced, or fallen by the wayside. I no longer know or recognize most members of the research team, though I believe the spirit of rebellion still persists. I expect you know better than I the appropriate course of action, and I leave you to it. Expect no further correspondence from me regarding these matters; this is my final epistle."

Trying to imagine playing this, it sounds like they were struggling to get it "right", and it may have kept feeling like a poor cousin of other games. In comparison to HL1 & 2, the plot seems a bit slow-starting, ends on an actual anti-climax instead of a cruelly-interrupted climax, and the game mechanics (snow/stealth, map phasing, time bubbles) seem to suffer from other recent games having done variations on these very well.

Add the inherent disappointment of not having the portal gun that everyone's expecting to be in there somewhere, and it could feel like it was bound to disappoint.

HL1 & HL2 did a very good job of switching genres and game mechanics from level to level, while still keeping everything clear and centred on a simple familiar mystery plot. The levels were able to establish their genres very fast -- usually from the first scene you saw as the doors opened or you rounded a corner. Everything was clear, and in both cases the motivating story was very simple, and the "plot" was setting. You've got to get help; you've got to get to Lambda Complex, We've got to get you through the portal to shoot what's on the other side...

This HL3 plot seems to have got a bit "Lost" (sorry, TV series reference): people's motivations are uncertain, there's a lot of exposition, and it attempts to partially unfold the mystery while always adding new ones... while still trying to make those bug-pod things work as a villain, which didn't work in Ep1 or Ep2.

Still, the bones of a good game are there. From my amateur eyes, it just looks like it needed to stop trying to resist/subvert the viewer's expectations, and just hit a few of the notes the player's been waiting for so they can have a note of satisfaction on the way to the new mystery.

I think a new Half Life game would be the perfect opportunity for Valve to showcase their virtual reality kit. So far there are no blockbuster VR games and the Half Life franchise (Portal included) has a history of being very innovative (e.g. HL2 using a physics engine for the narrative, HL1&2: story told through level design). There is a lot of potential to use immersive virtual reality to enhance the story telling.

This would have been a treat for those who had finished episode 2 and were waiting anxiously for the next step in the saga.

Sometimes it's simply not possible to do things, and fans understand, but Valve just shuttered the series and turned their back on fans. It's like Game of Thrones suddenly deciding to close down for no obvious reason and with no explanation to fans.

This reeks more than a little of the arrogance of success and it's in some ways a betrayal of all the gamers who appreciated Half life for what it was and propelled Valve to its initial success.

A lot of complaints about Valve. You know what people also complain about a lot? Companies milking their intellectual property. I think Half-Life has left a great legacy so far. If they ever decide to continue it, it would be sweet. I'd hope it would be in the same fashion: a shooter for PC, not VR bullshit. But hey, I'm still enjoying Half-Life 1 and 2+, so fuck all the whiners. Be thankful for what you have got, not a needy little baby crying for more! Maybe if you guys acted thankful, people like Gabe/Marc and others involved with what we love would listen.

WhatRuns is a free browser extension that shows you what runs a website, from ad networks and developer tools to fonts and WordPress plugins. You can also follow websites and get notified when they add or remove technologies.

We soft-launched a couple of weeks back and were lucky enough to be picked up by the Chrome team. We were featured on the Chrome Webstore, landing us 12k active users in one week. It was a huge validation and helped us tremendously in squashing bugs and making a finished product. We realise we have a long way to go, and our little team is working round the clock to make it happen. We also launched on ProductHunt today: https://www.producthunt.com/posts/whatruns

Would love to hear what you think :)

UPDATE:

Thank you for all the feedback!

Sorry about the occasional false detections. We are looking into this. It is largely because we detect a considerably larger number of technologies/plugins than our counterparts, so there are lots of possibilities for false pattern recognition, etc. Rest assured our team is working round the clock to improve accuracy and add more technologies/plugins.

Also, our servers are getting a bit cranky due to the huge traffic we are experiencing today. New websites (that were not loaded on WhatRuns before) are now queued up and might experience a 2-3 second delay. This is to ensure the best experience for our active users.

Noob question: looking at your competitors' traffic with SimilarWeb, they all have OK to low traffic, and none of them is really growing. So it might be a hard business to grow, since a lot of it is SEO-driven/organic.

However, Builtwith is selling some plans which also include SEO related features like keyword reports. I understand that some might pay for latter but there is even more competition in that space.

What I don't get: who should pay for your stuff? It's of course interesting to see other stacks, but honestly it's not a crucial thing. My CTOs and I know what we are doing, and of course we like to get inspired, but at the end of the day tons of research, years of experience, debate and the individual use case decide our stack, not what some random website does. Same for design-related stuff; btw, finding a font-face is just a Command-Option-I away.

So no offense, but I am just wondering why you start a business which is already there, which is hard to scale and which is hard to get paid for.

Congrats, WhatRuns looks very accurate in my tests so far, and indeed better in UI terms.

I only have one extra UI recommendation that I think Wappalyzer got right, which you could enable as an option.

When a popular CMS/language/server OS is detected, Wappalyzer will use its icon in place of Wappalyzer's plugin icon. E.g. if Joomla is detected, Wappalyzer's icon on the plugins' toolbar will switch to Joomla's logo.

There's a specific order to this preference, which looks to go from the CMS used (e.g. Joomla, WordPress etc.) down to the framework (e.g. Laravel), programming language (e.g. PHP), webserver (e.g. Nginx) and finally the server OS. In other words, if Joomla is detected, it will be displayed first, not PHP.

The above is extremely helpful for anyone developing for the CMS communities (like myself).

Of course, to maintain your identity as a plugin, you could use a double logo (a mashup of your own and the dominant/higher-level technology detected).

* UPDATE: You should also consider providing a way for anyone to easily suggest new frameworks, apps, CMS extensions/plugins etc. to be detected, by providing a name, icon, description and the way to be detected (e.g. HTTP header, pattern in the HTML output or even HTML comment, linked source etc.).

I'm leery of Chrome Extensions. They are basically just a plot to collect your usage data and sell it to marketing companies. I have disabled almost all Chrome extensions and locked down my browser. I got tired of the super targeted, annoying advertisements that were being thrown at me.

On the privacy side, I could see concern from those using the extension. When the site is not found in their database, the full HTML of the page appears to be submitted to the servers and processed. This is a bit of what you would expect, but may present some concern for cases where a new site is submitted and PII is sent to WhatRuns servers.

Unfortunately some sites that I am responsible for running in production are WP and we try our best to hide this fact and block all admin functionality to the public due to WP's less-than-stellar history of security vulnerabilities. This is the first tool I've seen that has detected it and now I'm stumped.

It is capturing my browsing behaviour: it sends every URL I browse, in the background, to the WhatRuns server, even when I don't ask what software the page is running (i.e. haven't clicked the icon). So WhatRuns gets a full browsing history from me (and you even set a UUID cookie to track unique users!).

This is a huge privacy issue! Imagine Whatruns is starting to sell this data!

To replicate simply open the dev-console for the extension and click the network tab.

I can't comment on the detection accuracy because this extension makes an important mistake -- it ignores the actual URL you are on and always performs detection on the root domain. So if I point the extension to a webapp at app.mycompany.com I get results for our marketing site at mycompany.com, which uses completely different (and more boring) tech.

It hangs on an old website, with the following error in the console: TypeError: Cannot read property 'hostname' of undefined at Object.setNoAppsFoundText (chrome-extension://cmkdbmfndkfgebldhnkbfhlneefdaaip/js/popup_final.js:153:22)

If this campaign is as sketchy as their laptop was, no thanks. I already don't trust them because of their misdirection in that project, they have a ton of goodwill and trust to rebuild before they can be taken seriously.

I have doubts, but I've been waiting for a phone like this for a long time. I hope it works out and gets enough funding. I'm glad they didn't make compromises on the OS, hardware switches, etc. I wish more companies would cater to passionate yet non-mainstream markets. It's crazy a truly hackable linux phone doesn't exist today.

The Riot app in my experience has a pretty unwelcoming UI/UX and is still insanely buggy. Things like Jitsi integration, widgets, and a phone partnership should come after a solid, stable 1.0 MVP, IMHO. Encryption is still opt-in and in beta.

So super supportive of the environment, the momentum and a native matrix phone partnership is the right move eventually, but please get it stable, fast and polished first before branching out too far.

Is this what the actual phone will look like? Some kind of context photo with a reference hand would be great, because the dimensions and shape otherwise suggest this would be comparable to holding a Kindle or Nexus 7 next to my head!

As much as I want a FLOSS phone to succeed, I have my doubts. Then again, I'm not sure how all the Matrix stuff works, but if I can't simply give someone my phone number and expect it to just work, I don't see how this can happen.

Can someone explain it more simply? Does it completely forego SIM cards? Will it 'just work', or is it more of a 'we made progress in this area, but not a lot of people are going to find it practical' thing like Replicant?

I wish these guys would just use Android from the get-go. It's already running on i.MX6, so by switching now they would at least have a chance of making a decent product.

They are literally throwing away hundreds of millions of dollars' worth of excellent power-optimization work for no good reason. I've told their CEO this at least three times, but he is stubborn.

> Android is so frustrating! Trying to remove Google's privacy invasion bit-by-bit removes functionality bit-by-bit, and you end up with a non-working phone. Purism will solve this by putting your privacy protection and security first. (Zlatan Todorić, CTO)

This quote is nonsense. There are at least three projects (Replicant, Copperhead, and another) that are shipping trees like this.

It will be so much harder for them to create a phone that stays alive for more than an hour or two running a full linux desktop.

> The CPU will be an i.MX6/i.MX8, where we can separate the baseband modem from the main CPU, digging deeper and deeper to protect your privacy and isolate components for a strong security hardware stack.

I'd like to know more about this separation. For example--can the phone boot without the baseband being powered on?

This seems like a really cool device. One that I would certainly purchase.

What weird stretch goals they have. I wonder if these are jokes?

"$8m = Signatures of entire team printed inside the phone case
$10m = Free encrypted VPN tunnel service for all backers for 1 year
$20m = Candy Crush (clone) available for free"

Do I understand correctly that it would be purely VoIP over GSM/2G/.../LTE? Where the particular VoIP implementation of choice is this "Matrix" protocol/ecosystem? Which potentially at some point (with enough financing?) may get some gateways enabling calls between Matrix phones like this one, and regular GSM/2G/... mobile phones?

I can't really afford a new phone right now, but I really want this to be a thing, and I think we're running out of chances for a FLOSS phone to take off, so I ordered one. I really hope it meets its target.

Their UI looks like it's based on Gnome -- I wonder if they wrote some extensions to make Gnome more phone-friendly.

IMO great idea to use Matrix as the communication layer -- especially when double-ratchet is stable, it'll be able to provide the good UX of things like Signal on Android, iMessage, Google Duo, and FaceTime, but built on an open platform. Hope it does well!

I couldn't find the single most important piece of information for me, the battery life. Is that not decided yet or has it been omitted for marketing convenience? I was thinking of signing up to get one but with that missing bit I don't want to take the risk.

Brick and mortar retailers finally got their way in 2012 when Amazon started collecting sales tax in states where it had no physical presence.

This removed the reason for Amazon to avoid that very same physical presence in so many states. Now we have local Amazon warehouses with one-day and same-day delivery, Amazon delivery lockers in convenience stores, Amazon-operated delivery vehicles, and soon Amazon grocery stores.

Most Whole Foods stores I've seen aren't exactly hurting for business, and the parking lot is basically full. If their stuff becomes cheaper, that'll drive demand way up, at which point they'll need more ways to buy. That might mean building more stores, but my guess is that Amazon is expecting online shopping to go up once the actual stores become a bit too crowded.

"salmon, avocados, baby kale and almond butter" - sounds more like they're going to go the Trader Joe's route: have a few high-visibility loss leaders that give the appearance of generally low prices but higher prices overall.

That said, I'm looking forward to the 365 brand being available through amazon.com. But, like at Trader Joe's, I'll have to re-check packaging to see from where they're sourcing the food.

I believe that top-quality ingredients will remain strictly reserved for better restaurants and home cooks willing to pay for them. There simply isn't the production capacity for that level of ingredient to be freely available to the common person, which is what Amazon's strategy here requires.

They can make a splash by lowering the price of a few ingredients with shocking price tags, like avocados. But given supply lines and production costs, it will be impossible for Amazon to turn Whole Foods into a high-end merchant of top-quality ingredients while maintaining any kind of margin.

If they want to play the Amazon loss game, they can for a while. But eventually, when a financial crisis hits and they have to rely on cash reserves, this company, cash-poor relative to its peers, will be in trouble.

Incredibly smart. There's a bit of a loss after an acquisition for obvious reasons, which usually means cutbacks, trimming the fat, etc. It looks like Amazon will be doing this but found a way to create a small rush of customers to offset it a bit. Very smart.

I for one will be going in there just to see what has changed. I haven't been in a WF for 2 years (New Seasons girl here), mostly because of cost.

I just ordered three 14-pound boxes of cat litter for $20 with free delivery on Amazon Pantry. It's frightening how they can operate with margins so thin. If Amazon can afford to take the same strategy with Whole Foods, then Costco, Safeway, and all the other regional supermarket chains are in for a world of hurt.

I'm willing to pay extra to eat ethical meats/dairy/eggs and products that generally avoid factory farming. And otherwise eat vegan. By the look of things, Amazon will eventually get rid of most of these things that Whole Foods made very easy for us.

Yes it's more expensive. That's because the cheapest foods that you buy at the cheapest supermarkets are fucking terrible for the livelihoods of animals.

Not a fan of this corporate buyout. Amazon clearly has a much different direction in mind for this chain. I wish they bought Kroger instead.

Probably not the right place to say it, but I really hope Amazon/Whole Foods pays attention to the quality of their hot bar food. In some locations, it's consistently great, while in other locations, not so much...

I noticed a few comments specifically referencing FTP (and who can blame them since the HN title as of this moment specifically references it). In the first post of the series, the author refers to the server as a "Secure FTP" server, which can be confusing to read[0]. In later parts (and a little googling of my own), it's clear that the server is actually an SFTP server, not a plain-old FTP server.

It's still plenty archaic, but takes the headline's shock value down a small peg[1].

[0] It adds a mental pause -- a Secure ... FTP server. It hints that, possibly, it's a reference to a different aspect of the server's security (a non-technical person might refer to a server as being a "secure" server simply because it's protected by an ID and password, for instance).

[1] Based on my personal interaction with banks and software, as well as several friends who had previously been members of a few banks' IT departments, my first -- very sarcastic thought -- was "of course it works that way!"

I had an integrator request this, so I stood up a Node.js server that only implements upload, not download. That way, if they leaked their own password, a malicious actor would be limited to forging data, and no real data could be leaked. Because it didn't work in FileZilla, they didn't want to use it.

I worked at another company that shuffled data between big-name gyms and health insurance companies; it also used CSV files sent over FTP in all directions, to my dismay. CSV isn't even a well-defined format, and you get all kinds of impedance mismatches with different delimiters and escaping mechanisms, character encodings, BOMs, etc. Other companies will just give you a SQL user and let you go to town mining their database directly.

I don't understand what's so difficult about making an API, but sometimes it seems like no one wants to do it. You can't push back too much or they will just see you as a problem and decide not to integrate with you.
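To make the CSV impedance-mismatch point concrete, here's a minimal Python sketch (the sample feed bytes are made up) that copes with two of the issues mentioned above, a UTF-8 BOM and a non-standard delimiter, using only the standard library:

```python
import csv
import io

def read_csv_robust(raw: bytes):
    """Parse CSV bytes defensively: strip a UTF-8 BOM, then sniff the delimiter."""
    text = raw.decode("utf-8-sig")            # "-sig" drops a leading BOM if present
    sample = "\n".join(text.splitlines()[:2])
    dialect = csv.Sniffer().sniff(sample)     # guesses ',' vs ';' vs '\t', etc.
    return list(csv.reader(io.StringIO(text), dialect))

# A hypothetical partner feed using semicolons, CRLF line endings, and a BOM:
rows = read_csv_robust(b"\xef\xbb\xbfname;qty\r\nwidget;3\r\n")
```

This only scratches the surface; per-partner encoding overrides and quirky quoting rules usually still need hand-written handling.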

I worked on the ACH system at the Federal Reserve Bank. When you're getting multi-gigabyte files from the Social Security Administration daily, with many millions of transactions in them, you appreciate the NACHA format's compactness (~100 bytes per transaction). We never transmitted files over insecure protocols like FTP, though.
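For a sense of what handling those compact fixed-width NACHA records looks like, here's a Python sketch. The field offsets below are illustrative and the sample record is fabricated; a real parser should take the layout from the official NACHA specification:

```python
# Illustrative offsets for a NACHA "entry detail" record (type 6); a real
# implementation should follow the official NACHA spec, not this sketch.
FIELDS = [
    ("record_type",      0,  1),
    ("transaction_code", 1,  3),
    ("receiving_dfi",    3, 11),
    ("check_digit",     11, 12),
    ("account_number",  12, 29),
    ("amount",          29, 39),   # dollars * 100, zero-padded
    ("individual_id",   39, 54),
    ("individual_name", 54, 76),
]

def parse_entry(line: str) -> dict:
    """Slice one fixed-width entry detail record into named, stripped fields."""
    rec = {name: line[a:b].strip() for name, a, b in FIELDS}
    rec["amount_cents"] = int(rec["amount"])
    return rec

# Fabricated example record, padded to a 94-character record length:
sample = ("6" + "22" + "01234567" + "8" + "123456789".ljust(17)
          + "0000012345" + "EMP001".ljust(15) + "JANE DOE".ljust(22)).ljust(94)
entry = parse_entry(sample)
```

The compactness the parent mentions comes from exactly this kind of layout: no delimiters, no field names, just positional slices.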

They mention that other regions' inter-bank money-transfer systems (e.g. the EU's) have been sped up to be same-day, or in some cases nearly instantaneous. The US ACH system lags behind, due to the sheer number of institutions that would be involved in a modernization effort. (There are a lot more US banks than there are UK/French/Canadian/Australian/etc. banks; I think in part because a bank that operates in 50 states is technically (and legally) 50 banks, and each one maintains its own ACH infrastructure?)

Having spent a few years at a large energy company, I got quite used to the use of FTP servers to exchange (what else?) CSV files full of data. And is uploading/downloading a file to/from some FTP server really that different from POST/GETing an object to/from some REST service?

Some major news/market information provider solely made their data available to us through ftp. And used Amazon SNS to push a notification that something new is available on that ftp.

There are several companies that provide an API on top of ACH. I work for one[1]. For high volume ACH (like a payroll company) it's usually cheaper to go through an API provider than it is to go directly through the bank. I'm not exactly sure why. Maybe because we handle technical support? We also have better reporting.

One of the challenges for banks is that there is an oligopoly on the software that runs the bank. There are 4 companies that provide the "core banking" software to most of the banks in the USA. The banks get stuck providing you with whatever services one of these four pieces of software is capable of.

I've implemented ACH file format parsing and, worse, FedWire BAI2 file parsing; it's absolutely archaic. The worst part is that various partner banks have differently erroneous variations of their implementation of the BAI2 spec, so we had to intentionally code buggy versions to match the bugs they had on the other side. Ridiculous.
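One common way to survive those per-partner deviations is to normalize each bank's known quirks before a shared parser runs. Everything in this sketch is hypothetical (the partner names, the specific quirks, and the record shape are invented for illustration), but it shows the "match their bugs on purpose" approach in miniature:

```python
# Hypothetical per-partner fixups applied before a shared BAI2-style parser.
# The partner names and quirks here are invented for illustration only.
PARTNER_QUIRKS = {
    "bank_a": lambda line: line.rstrip("/") + "/",     # doubles the record terminator
    "bank_b": lambda line: line.replace(",,", ",0,"),  # sends blank amounts for zero
}

def normalize(partner: str, line: str) -> str:
    """Apply a partner's known deviations so downstream parsing sees clean records."""
    fix = PARTNER_QUIRKS.get(partner, lambda s: s)
    return fix(line)

def parse_record(partner: str, line: str) -> list[str]:
    """Split a normalized record into comma-separated fields (terminator stripped)."""
    return normalize(partner, line).rstrip("/").split(",")
```

Keeping the quirks in one table at least makes the intentional bugginess auditable, instead of scattering it through the parser.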

FWIW, SWIFT (the company behind the interbank payment system) has been developing and pushing ISO 20022 as an XML-based long-term replacement for their SWIFT message format, though it's not designed as a replacement for ACH.

For that, there was HBCI years ago (also XML); don't know if it's used much still.

While "part 1" of this series says "FTP" (implying plain-text/unencrypted data), "part 2" [0] and "part 3" [1] both say "SFTP". This is "more correct", in my experience, as encryption is pretty much always used nowadays.

> At Gusto, we rely heavily on the ACH network. For example, when a company runs payroll, we'll use the ACH network to debit the company's account to fund their employees' pay. Once we've received these funds from the company, we'll again use the ACH network to initiate credits into each of the employees' accounts to pay them for their hard work.

Can you use ACH to initiate a transfer between two (third) parties (i.e. you not being one of them)? If not, what are the requirements to be a broker / escrow in between them?

Going from the UK to the US was like stepping back in time when it came to banking. I remember complaining about some aspects of UK banking - it's going to take a day for my transfer to complete!?! Now we have faster payments in the UK which complete in hours at most.

Meanwhile in the US I still had to pay my rent with a physical check because that was easier than figuring out the weird 'pay anyone' implementation my bank had.

1/ See people encounter how these things work, because there's usually a sense of lost innocence about it. (If they stick around long enough they come to understand that dealing with hundreds of years of history is why glib "re-imagine everything" solutions tend to come a cropper).

2/ Continually discover that by the standards of the rest of the world, US banking is even more like banging rocks together.

This is why we need better regulations for adopting blockchain technologies in various industry sectors, including banking. It's also another way of sharing and saving infrastructure costs for banks while providing better security than traditional banking systems like ACH. Perhaps ACH could be rewritten as a smart contract in a safer, more secure way.

Not questioning his innocence, but will the executives involved be held to the same legal standard?

EDIT: FTA

> U.S. prosecutors have charged eight current and former Volkswagen executives in connection with the diesel emissions cheating probe. Liang is one of the lowest-ranking executives charged so far.

> Another VW executive, Oliver Schmidt, has pleaded guilty and is scheduled to be sentenced in Detroit on Dec. 6. Under a plea agreement, Schmidt could face up to seven years in prison and a fine of between $40,000 and $400,000 after admitting to conspiring to mislead U.S regulators and violating clean air laws.

The German automotive industry has a very aggressive culture. Our (German) managers put enormous pressure on us to do whatever it took to win deals. I can clearly remember watching one of our engineering managers turning a very unhealthy shade of green (~= #d2dbc9) as her boss turned the metaphorical screws to pressure her to win our segment's first big deal -- ironically, with VW as the customer.

Shamefully, I stayed silent as I watched my colleagues lie through their teeth to the customer: claiming, for example, that a range of functions were already mature and in series production when in reality they were little more than vapourware at that point. I have worked for enough startups to understand the concept of "fake it 'till you make it" -- but there are limits. In particular, the organisation needs to be prepared to follow through and make good on promises made -- or come clean when it becomes apparent that something cannot be done.

One needs to be especially careful when we are dealing with safety-involved automotive autonomy functionality where the consequences of hastily engineered and inadequately tested algorithms can sometimes be serious. I can well remember getting the distinct impression that the engineering team for these early products were essentially being set up as "fall guys" -- we took tremendous risks to win that early business, but it was clear that we would be very much isolated and alone when or if any of the potential downsides were realised.

So purposefully modifying cars to illegally pollute to huge levels, causing deaths and disease from said pollution gets a few years in prison? The range of time people are charged with is so arbitrary and usually inconsequential with mass scale white collar crimes, and totally excessive for the poor and/or minorities. Its insane.

While I approve of strict punishments for such terrible actions, I worry that this is a case of a "scapegoat" being advertised as punished, so that the real decision makers higher up can go with lighter reprimands. Since the scapegoat is guilty of the crime, the deception is much easier.

I worked on a 'sharing economy' service writing software. We had a project that boiled down to "we've found a legal way to start skimming tips without the users realizing it". It was technically legal. It was completely unethical. It was bullshit.

I called management out on it, said that I did not want to wind up like a "rogue engineer" at VW when the shit hits the fan, because I felt our upper management would gladly throw us all under the bus. "Don't worry, this is all perfectly legal within the contracts". They didn't understand the difference between "this is illegal" and "this is unethical".

I quit the company. I won't put loyalty to a company ahead of ethics and morals.

These VW engineers? Maybe the executives were the big fish who made the call, but they were the ones who agreed to make it happen. They are just as liable, in my view.

(I hear the project was eventually canned. The users who were beta tested on it immediately realized what was happening and threatened to quit.)

I was watching a German program on the VW scandal and was surprised to learn that the German government allowed VW to issue a software patch to correct the problem instead of requiring it to repair or compensate owners for the cars they purchased. Even with the software patch, the vehicles still released emissions well above the norm.

Also, I don't believe any VW executives have ever been charged in Germany.

What advice do you have for someone who's on a H1B visa, wants to co-found a startup, incorporate it, work on it part-time until it receives VC funding, and continue working at their H1B "day job" in the interim?

My understanding is that the H1B visa does not allow you to do any work for anyone apart from your visa sponsor. If a co-founder were to spend his evenings working on his startup which has been incorporated, I'm not sure if that would conflict with the above regulation, and if so, how to work around this.

Hi Peter, thanks for sharing info on what I feel is quite an unknown subject to an outsider. I have a general question for you:

As a skilled software developer with a relevant UK university degree (3-year BSc) and workplace experience, interested in working in the US: what is the ballpark range of costs and wait times involved in getting a visa that would allow me to work for a US company?

What's the general procedure? Get an offer from a job, then talk to an immigration lawyer, or the other way around?

I won the green card diversity lottery and will finalize the process in about a month when I land in the USA. After that I'll be a permanent resident. I'll stay for a few months to set things up, leave for a few more months to sell some property and then move there permanently.

If things don't go very well and I decide to relinquish the green card and return, will I be subject to any kind of exit tax?

Also, I'm having a lot of trouble setting up an address to receive the physical card. A PO Box or mail aggregator is not acceptable and I can only change the address up to the point of entry. This is a major concern for me because I don't have anyone in the USA that could receive it on my behalf.

Is it possible to use "General delivery" near my arrival airport to get the card? I ask because up to 2 weeks ago I didn't even know about that concept so I'm still exploring that possibility.

I'll finish my PhD in ECE around May 2018 and look for employment in the Bay Area. Do you think I should apply for NIW (I have over 300 citations and 10+ peer reviewed publications) or go through the process with H1B?

My wife and our son are on F2 visas right now. My wife is a computer engineer and was not allowed to work on the F2 visa during my PhD. Can she work as an H1B dependent? Do you think it is worth spending $10K to obtain an NIW?

I've used up all 6 years on H1B (including recapture) and am currently transitioning to F1; I'm exploring the O-1 visa. I'm an Indian citizen, born in India.

Question 1: Can I qualify for an O-1 visa if I'm part of a company as a co-founder/CXO that's been accepted in Y-Combinator or similar programs? (does that satisfy the "attained membership in associations that require outstanding achievements....."?)

Question 2: In the meantime, if I want to register a company in the US (for liability reasons) to release a free app in the app store, can I do it under my current visa status (change of status from H1 to F1) if there are no plans to monetize the app in the near future?

I am going to sign off now but I'll be back on again this weekend to respond to any final questions and comments. As always, it's been a pleasure conversing with everyone. I always learn something. Thanks.

I have one company in India and another in the US, and I am the CEO of both. Same name and same directors (Indian citizens), but not a subsidiary. The Indian company is 4 years old and the US company 1.5 years. For the US company we have a US bank account, tax filings, etc. I visit the US frequently on a B1. What is the best option for me to pursue a green card? Is L1A a good option?

Hi Peter, thank you for doing this. I am in the process of starting my own company in Norway, and I am planning on applying to YC within the next year. I am wondering, if we get accepted, what will be the best way for me to legally start a US company and work for it in the US for three months? I've heard that H1B will not be possible since there would be no employer-employee relationship, so what would be a good option? You did mention O1, but I am afraid I do not qualify since I am straight out of university.

YC says that they accept 10+ non US companies for each batch, do you know what visa they use while in the US?

I'm in US on a TN visa and started a company (no revenue/no employees) to list apps on app store. When I went to renew my TN status, Immigration officer gave me a bit of a hard time saying I needed approval to open this business from Homeland Security. Is this true?

As a Canadian citizen married to an American - would voluntarily abandoning your permanent residency (say, to live in your home or a third country for a few years) make it more difficult to re-obtain permanent residency in the future?

Hi Peter, I'm working in the US for a big tech company on an H1B1 visa (something like a lightweight version of the H1B, but exclusive to people from Chile and Singapore). My wife is a US citizen. I want to apply for a Green Card, but I'm not sure which path I should follow. Am I more likely to get it through my marriage or through my employer? Is there any other relevant reason why I should choose one over the other?

Hi Peter! First off, just wanted to say thanks for all the help you provide here! It's nothing short of amazing!

I'm currently on an H1B, but I'd like to set up an ecommerce store with a friend. I understand that itself may not have enough grounds to get an O1 visa. Is there any other workaround for this scenario?

Because Peter is probably too modest to self-promote, let me do it for him: working with him is great, it was completely friction-free and we got our employee's visa situation handled very, very quickly. Highly recommend.

Hi Peter. I am a US permanent resident and will be moving and starting work in US soon. My wife is not a permanent resident or citizen of US. I'm aware that I can apply for an F2A visa for my wife, but she will have to wait outside the US for nearly 2 years.

Is there a way she can be in US, with me, while she waits for her permanent residency?

Things we had considered: a) she can stay in US and work with a US company (unlikely) b) she can stay in US and work remotely with her company outside US. c) she stays in US and takes up studying d) she stays in US and just waits.

If I want to enter the US under the TN visa, do I have to get a job offer that says it's only for a period of up to three years (the max TN term)? What should the job offer letter say about the period of employment, if anything at all?

Hi Peter, has there been any uptick in RFEs and denials for H1 applications and transfers under the current administration? Have there been any other noticeable changes for startup immigration under this regime?

Consider the following scenario: a foreign (non-resident alien) founder gets funded in the US, standard C-Corp. The founder and the entire team are based in another country. The founder comes to the US once in a while for somewhat extended stays (~1 month) to fundraise, do deals, etc., on a regular B1/B2 visa. But, expectedly, they will still do a bit of work in the meantime. Are they in violation of the B1/B2? If so, would being paid by a foreign subsidiary help?

Hi Peter, currently on my 3rd E-3 visa with the same employer as a 'Public Relations Specialist'. My employer is now starting the green card process. They are preparing the application, and have updated the job description to reflect my current responsibilities and minimum requirements.

My BA major is in a field called "Performance Studies", which is an obscure interdisciplinary sub-field of the social sciences and humanities. My specific research is directly related to my job (experiential marketing in nonprofits), and my employer considers this a "related field" to a Marketing, Communications, or Public Relations major.

My issue is that my employer does not want to list "Performance Studies" as a required major in the minimum requirements, but my lawyer is recommending we do this to avoid a denial. What are my chances of approval if we list the minimum requirement as "BA degree in Marketing, Communications, Public Relations, or related field", with Performance Studies plus my specific research as the "related field"? I'm finding it hard to get advice from peers, as most people I speak with applied for their GCs as engineering or mathematics majors - your thoughts are much appreciated!

Would applying for a green card on a TN Visa be considered a violation of the non dual-intent of the visa and prevent me from renewing/applying for TN Visa at the border?

Context: naturalized Canadian citizen (Indian born) on a TN Visa working in the states.

From what I have read, green card applications are determined by country of birth, and for India are upwards of 3 years. So, I would like to know if an application for a green card would jeopardize future TN Visas at the border.

I went to Los Angeles and did a summer job at Axiotron in 2008. Then I tried to convert to a student visa when I moved to study at UCSB for an exchange programme.

My address changed, and I never got a letter asking for proof of funds. The USCIS didn't recognise the letter from my parents' bank.

I petitioned to reopen the application when I found out it was denied. I waited for months, and eventually was given 30 days to leave the country because I didn't have $25,000 cash in my own name (I was 19 years old. I still don't have that much money now). Thankfully I was already scheduled to leave 7 days later - the process had taken the entire year, so I finished my exchange programme.

I think that means the US kicked me out, and I can never get a visa to go back. I did travel there as a tourist once, over land from Canada just in case.

Is it worth ignoring any opportunity to work in Silicon Valley because of that bad experience? I'd rather work in New Zealand or Canada or (stay) in Taiwan anyway.

Please, all H1Bs of Indian nationality: do not waste your time, age, money, and family life waiting for an H1B-based green card. It's a lost cause. Move to Canada, Australia, or somewhere else. Live a good life rather than being indentured servants for US corporations for a good chunk of your productive age. I moved to Canada some years back and I am really happy with my decision.

If a recent college grad is on OPT, and let's say they majored in actuarial science (STEM), are they allowed to make money selling crafts and art that aren't related to the major? I know there's a clause about jobs unrelated to your major, but I wasn't sure if this applied to selling art or having art showings. How does this work?

Hi, post docs in the USA often hold J1 Visas. Afaik this means: no intention to immigrate, and no right to start a business. But life plans may change, and thus both these may become problems. What advice do you have for entrepreneurially minded researchers w/o the right to act on it?

Thanks for answering questions. This is sort of an oddball question, but does the case of Xytex Corporation v. Schliemann in 1974 still hold much bearing these days on immigration and technology employment?

I was told by Mr. Perera, that it was one of the first cases in this field and he was always proud to have been involved in it.

I'm here on an H1B. My employment-sponsored I-140 was approved over 180 days ago. I'd like to change jobs, and my prospective new employer has renewed/transferred my H1B (not approved yet, but I have the receipt). The I-140 application has not been transferred.

However, the day I intended to resign my current work I got a notification of interview from the USCIS (to take place in the next month or so).

The interview, I'm told, may result in getting the green card that day, or they may need up to 5 weeks for additional review.

What happens if I do follow through and change jobs in the days prior to the interview? Does that have any effect?

What advice would you have for someone who wants to immigrate from Canada to work in Silicon Valley but does not possess a post-secondary degree? Is there a particular visa that is well suited for tech workers without a post-secondary education?

I know that while on H1B, I can co-found a company as long as I can demonstrate an employer-employee relationship. But what can I do if I want to apply for a green card under my company, and I only have enough qualifications for EB2? From what I understand, Labor Certification for EB2/EB3 will not go through if the applicant has significant shares of the sponsoring company.

I am an Indian citizen on H1-B with a GC EB2 priority date of 4/2011 and an approved I-140. I've been with the same company in the US ever since. Is there a way to make my GC processing go any faster? A lot of my friends are in a similar situation and are eager to start a company. Many thanks!

Hi Peter, I got a L1B visa a month ago, i.e. I'm an intracompany transferee and I want to know if I'd be authorised to work for other companies in the US in the future. If so, should I have to get a different kind of visa? What's the process like? Thanks!!

I first had my H1b approved several years back. I was with Company A for 2 years before I moved back to India to work at another company. I then came back to the US where I worked on H1b (same visa) for Company B for another 2.5 years. So I've used up around 4.5 years on my H1b that was first issued in 2007. I left the US and am now based in Canada.

I recently got another offer from Company C in the US. Does Company C need to apply for a new H1b, or can they simply transfer the current H1b I have? Note that my current H1b (that was sponsored by Company B) expired in May 2017, but I still have 1.5 years that I can use on it before the 6 year limit, as far as I understand. I hope my question makes sense.

Hi Peter, I'm an Iranian PhD student and got 2 offers, from a big co and a startup. Both companies were very excited about having me on board, but now both have decided not to move forward with my export license. This comes at a terrible time, as I just got my OPT and now need to find a new job. Can you shed some light on the requirements of an export license and the costs associated?

Could you please comment about changing employer after obtaining employment-based Green Card?

It's considered safe to work for the sponsoring employer for at least 6 months after getting the GC. However, there is no such legal requirement, and there's the AC21 Act. I've also heard about a 2-year period after getting the GC: if the applicant worked less than 2 years for the sponsoring employer, they should prove their intent to work permanently; after 2 years, the burden shifts to USCIS to prove the lack of intent.

Thank you for doing this Peter! How does the H1B transfer work when switching jobs? How do I make sure that I can stay in the country while switching jobs and I don't have to wait 3 months to get an approval.

Hi, I received a full-time offer to work as a software engineer at a company in the Bay Area after my graduation in May 2018. They are willing to sponsor my H-1B, but I am also looking into other options in case I don't get it.

Now, I am an undergraduate student in the UK. I am also finalizing the contract with the same company for remote part-time work (20 hours/week) during my final academic year (around 9 months of work). I would be on the EU Payroll of the same company.

Hello Peter. First, thank you for taking the time to address these very important questions. I am sure these are very emotional topics for many, and we appreciate your help.

My question is: what is the process after submitting the DS-260 and supporting documents for Immigrant Visa / Consular Processing? My interview should be scheduled in Tbilisi, Georgia, for which I believe visas are current. I would appreciate it if you could advise on approximate time frames for each step.

Hi, I worked on an H1B from 2008 until 2009, then from 2013 until 2015 for a different company; all in all I have used very close to 3 years.

1. Can I reactivate my H1B at any moment to work for a company in the US for another 3 years?

2. On an H1B, is it possible to work in the US for 1 week per month and the rest remotely from abroad? Does it have to be at least 2 weeks per month? 4 weeks per month?

Thanks so much for doing this!

Are there any drawbacks that you can think of with regards to having a green card instead of staying on H1B? The only thing I can think of is that having the green card can mean there is potentially an exit tax to pay for high-net-worth individuals, assuming one wants to leave the US after more than eight years.

Also, if someone is on H1B and ends the visa (eg. break in employment), do future H1B applications have to go through the lottery again?

Hi Peter, thank you so much for this! I have questions about the O1 visa.

1. If a startup sponsors an O1 for a founder, will there be any issues with 1/ the O1 founder having the CEO title 2/ the O1 founder owning between 30-50% equity in the company? Is it effectively the same as if I was on a green card?

2. If the startup that sponsors the O1 substantially pivots to a new idea, what are the implications for visa status? Does it require an entirely new application?

Background is that I showed the consulate all my sales reports. I'm in a niche where almost all worldwide sales are in the US. So about 85% of my sales are from there.

They're not considering the documents. They want a report from an accountant or an auditor. I'm producing that, but given their extreme skepticism so far, I'm wondering if there's something else I should be doing.

Hi Peter, I am a Vietnamese student studying in the US on an F-1 visa. Recently, together with my American partner, we opened a startup using Stripe Atlas.

The visa doesn't allow me to work in the US so I'll go back to Vietnam in the next four months to work on the product. But when I go back to school, what is the best way for me to work legally? I know I can apply for OPT but it would take me up to three months to get approved. Is there a better solution?

Hello Peter: I'm a U.S. male citizen who married a Mexican single mom. The child is a U.S. citizen as well. We're in the process of getting my wife a green card. Because of my job in L.A., and because my wife's business interests are in Mexico, we have a commuter marriage. She intends to stay in Mexico until we're empty nesters. Will that be a problem in the interview when getting her green card? Thanks much

Peter, for an E2 visa, when demonstrating that an investment is "substantial", how does the accounting work for intangible or in-kind contributions to the business? Is it possible to include the value of time spent building the initial product (while outside the US)? Does it make a difference if the applicant had a foreign company to do the initial development and paid themselves a salary?

I am currently in the USA on an L1 visa, and I also have an L2 visa and EAD which will expire in February. Is it possible to extend the EAD while having entered the USA on the L1? And is there a way to check the validity of the visa? When I got the L1 visa my B visa was stamped as canceled, but the L2 remained, and I want to make sure that it's still valid.

If someone has:

- a full-time job in a foreign country (say, Germany) that sends him to the US on a 2-month assignment, and

- a part-time job in the US (1 day per week), which lets him work from Germany but also in the US under an already-approved part-time O1 visa.

How should that person enter the US for the 2-month trip? Under the Visa Waiver Program (or a B visa), or under the part-time O1?

Hi Peter! A friend of mine has just started OPT and has founded a company. She plans on using the STEM extension too, which was a successful path for myself and some others, but the recent changes to the STEM extension seem to be considerably more limiting now. Do you have any guidance on options for founders considering the OPT and STEM extension route?

Hi Peter, thanks for doing this.

1. Since premium processing for many H1B categories is suspended, are tech companies willing to wait the 2 or so months for USCIS approval to hire an H1B worker (assume in this case the person cannot work on the receipt of the H1B application)?

2. What processing times are you seeing currently for H1B petitions?

Hi Peter, thanks for your time. Any specific tips for the E3 process when the US employing entity is brand new? We are an Aussie company that has been around for 7 years. We are setting up a new US entity, and the first hire in the US is an Aussie. In your experience, would a new entity face any extra scrutiny?

Thanks for doing this Peter. I am currently thinking of accepting a US computer programming job (part in US, part in Canada) and am thinking of using a TN visa to travel back and forth (1 week per month in US, 3 weeks in Canada.) Do I need to be concerned about what might occur if Donald Trump et al decide to drop NAFTA?

Hi Peter, I recently moved to the US (from India) on an L1B. While I was in India, I had an app on the App Store making a small amount of money. I developed this app in my personal time, and it is not related to my job. Can I continue improving the app (in my personal time) while I am in the US on the L1B?

Hello Peter! I am a US Permanent Resident since 1997. Should I be concerned about traveling internationally at the moment, US politics being what they are? Or is it safe to assume that if I leave the country for a short while, I'm not going to get turned away at the border?

The year is 2050. You are reading this comment from a compatibility layer in your open-source browser that translates HTML from the 2010s into Thought-Interface Language 3.2, which was an open standard ratified in 2045 by a global consortium of content and browser developers.

Back in the 2010s, web access was peculiarly gated in a dendritic configuration as ISPs provided all the single-points-of-failure interconnections between end users (including both content providers as well as consumers) and the true "internet", a multiway resiliently-routed interconnect of servers. As we know now, extending the peer-to-peer core of the internet down to the consumer has had lasting impact, including breaking up the routing monopolies of the ISPs as well as making it possible for anyone willing to spend a few grand a year on server capacity to host a new peer-to-peer router for nearby Internet users.

Many of you may not remember the origins of Google as a "search engine", a monolithic index of "every reachable page on the internet." Such a quaint idea has long since joined even further historic concepts such as Yahoo's "human-curated list of pages on the Internet". Ever since the Searchtorrent protocol was introduced and consumer searches were conducted on one of several competing distributed hash tables across the internet, no one entity has had to shoulder the responsibility of storing all the web content on the internet. This author gladly pays a small monthly fee to a local search cache provider for reliably fast localized caching of search results.

The web is here to stay. Remember your history next time you visit the local Homo Sapiens preserve and give thanks to the carbon-based beings that invented the Internet.

>If youre over 50 you might just remember the birth of Google, with their famous motto Do No Evil.

I love how people misremember this motto. The original slogan was "Don't be evil" which is quite different and far more subjective to start with. Now they have updated it to "Do the right thing" and you can imagine how easy it is to dance around that.

But people seem to think Larry and Sergey were actually trying to be ethically meticulous. Nonsense--the slogan always had the subtle meaning of "Don't be Microsoft-level evil" and it turns out that was not an easy hurdle to clear.

I disagree. Cryptocurrencies have shown that the new generation (as well as the old one) can embrace new and decentralized technologies.

The decentralized web is already a "successful" idea. The correct implementation for its wide use is not there yet. But it will be there.

It is just a matter of time before we have a bigger "dark web", a decentralized web, decentralized payment networks, and still have Google, Facebook, and the likes.

As the internet population grows and people move to more digital lifestyles, people won't be limited (or gravitate) to a single portal. Instead, they'll spread over different networks/infrastructures for their different needs. Facebook can still be successful and grow while the decentralized internet happens.

The Internet is growing both in number (population) and in use. People today use the Internet to surf, chat, read the news, buy stuff online, book flights and hotels, pay taxes, work, study, find partners, buy drugs, etc...

I like the idea of "rebooting the web". If things continue in the direction they are going now, I could see many forms of the internet existing. Just as the Darkweb exists, I could see other splinter networks and technologies taking shape as the internet we know now becomes more homogenized, whether it is because of giants like Google and Facebook or government control (oh god pls no) or any other factor.

I still fondly remember looking at Nike's newest shoe offerings in 1997, waiting for the photos to download and listening to my dad complain about the phone line being tied up. I looked at my girlfriend the other day in fact, and just went "god, think of how different the internet is now compared to when we were younger. What the hell will it look like in twenty years?" She called me a nerd, but still considered the question. Exciting and slightly terrifying thought to ponder, really.

If I know anything about the future, it doesn't look like the present.

The web won't look like it does now in 2050, and neither will the internet.

But it might very well be built on webassembly on browsing engines cum operating systems on top of hypervisors on top of verified microkernels, and the web will probably be delivered on top of HTTP/2 on top of TCP/UDP and so on. The layers probably won't change that much.

If things really are so dire in...33 years, then it won't be Facebook or Google's fault, it'll be the fault of hundreds of thousands of hackers who had the technology available and did nothing because everyone knows those two are unbeatable, despite the fact that the tech gets cheaper and more accessible every single day.

We've got a long way to go. They're not unbeatable. They're massive goliaths, yes, but they're also bloated and slow to adapt, can't focus on any one thing, and don't have consumer loyalty. They can be beaten. Not saying they will be, but they can.

Side note: Halt and Catch Fire, which has always tried to be technically accurate, starts focusing a lot on the early web in seasons 3 and 4. CERN, NeXTcubes, and related all make an appearance. It's a fun watch if you are interested in that stuff. The pilot starts with them reverse engineering an IBM PC.

If you reduce the details of the story into the statement "the future of the web will be driven by anti-trust", I'd probably agree. The _present_ of the web is driven by anti-trust, and there's always more consolidation.

Where machine learning, social networks, and advertising have economies of scale, a tolerable future for the web would necessarily involve diseconomies of scale. Personal connection, concierge service, local long-term engagement with communities.

Servers are only going to get cheaper. Programming is only going to get easier. If anything, things like search engines and social networks are going to become more competitive.

If someone has a genius idea for making a better engine, he won't work for Google; he'll create his own.

Implicit in this fear of centralization is a kaczynskiist belief that "everything that can be invented has been invented".

People predicted some company taking over everything forever, and in fact even before the web existed, sci-fi authors imagined a centralized network where, from the servers to the software, everything is provided by the government. It's never going to happen.

The rebooted decentralised web sounds exciting, but it's hard to deny that there is a large number of projects that only Google can carry out. At what point does the dominance become irresponsibly large and require intervention?

If history repeats itself, then some new technology will take Google and Facebook by surprise. And let a new player rise to the top.

AI is the obvious elephant in the room here.

If in 10 years Apple, Amazon, Tesla or some new startup has the better AI, then this AI will search and present content better. And market it better. And monetize it better. It might also produce its own content. Perfectly customized interactive 3D surround sound content.

Maybe it will be some decentralized autonomous organization that lives on a blockchain. Driven by AI, doing its thing. Outside of what a human mind can understand.

If I were to bet, I would bet that in 2050 the web will be mostly replaced by some kind of VR network with a lot of sound, 3D videos, and interactive objects. The web as it is already decays due to tons of legacy cruft, insane complexity of doing trivial things, oceans of bad content, and hyper-centralization. And all of these things are getting worse every year. VR is our best bet for a clean start.

This scenario contains a lot to unpack. Let's try to extract some of the claims:

1. Most websites will get little to no traffic.

2. Consolidation will eventually result in a mere handful of verticals remaining, in the author's opinion, solely Google and Facebook.

3. At first, content framing tactics, like FB Instant Articles and Google AMP, will result in these providers obviating the need for users to navigate outbound links; instead, the content will be surfaced from within the ecosystem.

4. Content providers (i.e. "publishers") go along with the above because in truth they are desperate for revenue. Giving away content for free in exchange for the potential of display ad revenue due to high volume is seen as their only realistic hope for survival, making this a coercive relationship.

5. Some strange political speculation, but, notably, the two giants banning people and services who have presence on the other. Also, independent newspapers get bought out and absorbed.

Out of this distillation of claims, #5 is complete baloney, more egregious than an industrially processed slice of knockoff mortadella, beyond even a fanciful fantasy of how these companies work. Claims #1 through #4, on the other hand, are very astute predictions, or rather, observations, as they're already here.

The long tail of websites is pretty long, and most sites indeed get very little traffic even today. One need look no further than the power of communities like HN and Reddit to slashdot all sorts of sites by overwhelming them with legitimate traffic. This brittleness and inability of some sites to scale to momentary demand, along with ISPs forbidding home servers and the risk of malicious denial-of-service, means that the original way of self-hosting sites on the Web is largely dead [1], or at the very least, a risky call. This unfortunate fact means you probably want to pay someone to host your site instead. Though there are thousands upon thousands of professional hosting providers, it's a dramatically smaller number than the number of websites; so we're slowly walking up the tree of vertical consolidation.

#3 is well-documented, and #4 follows naturally from the tribulations of finding business models that work on the web [2].

I stand by the view that #5 is too much of a leap; willingly excluding potential customers seems like an act of folly -- Home Depot doesn't ban anyone who shops at Lowe's, but instead they'd love to lure them away. Orthodox airlines in the US at time of writing might as well be regarded as quadrupoly: they have suspiciously similar ticket prices for most non-hub destinations, and they have semi-secret programs to offer matching frequent flier status to the topmost tier of most profitable travellers if one wants to jump ship.

Nonetheless, there is in fact a real emergent phenomenon in the continued vertical consolidation of content silos. Apple, mysteriously absent from the author's narrative, is the exact sort of player whose excellent products, dedicated fanbase, and seeming benevolence will result in the sort of transformations that the author fears: Apple has doubled down on producing original content [3] for its captive ecosystem, following the tactics of Amazon and Netflix, but unlike them, Apple's presence does not extend horizontally to other platforms. In fact, we just had a trending article [4] which covered in-depth the different tactics companies use to achieve reach and retain customers.

It's more believable to envision a future similar to what happened to major US television networks: NBC, ABC, CBS, Fox, and Turner; lots of mergers and intrigue, phases of ownership by movie studios, phases of ownership by seemingly unrelated enterprises that pivoted to holding companies from something else, acquisitions in efforts to form new verticals; and yet despite all this, there's still several of them. They're all deeply vertical now, but their valuations and regulatory pressure keeps them existing side-by-side.

Plugging my near useless Python library that does this and a lot of other subtle, annoying things to break programs. The library is essentially a display of how much Python actually exposes to the user and how modifiable it is.

Summary: Integers in Python are full-blown objects. Small numbers are stored in a central preallocated table where each entry represents one number. Setting a variable to a small integer makes it point to an entry in that table. Multiple variables may point to the same small integer objects in that table. Fooling around with the table leads to funny results.
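The sharing described above is easy to observe. A minimal sketch (CPython-specific behavior; `int("...")` is used here to defeat compile-time constant sharing, so any object reuse we see comes from the small-int table itself):

```python
# CPython implementation detail: small ints are preallocated once and reused.
a = int("100")
b = int("100")
print(a is b)   # True: both names point at the same cached object
print(id(a) == id(b))  # True: same address in CPython

x = int("1000")
y = int("1000")
print(x is y)   # False: larger ints are allocated fresh each time
```

None of this is guaranteed by the language spec; it's purely how CPython happens to be implemented.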

Kellogg and Bourland describe misuse of the verb to be as creating a "deity mode of speech", allowing "even the most ignorant to transform their opinions magically into god-like pronouncements on the nature of things".

Bourland and other advocates also suggest that use of E-Prime leads to a less dogmatic style of language that reduces the possibility of misunderstanding or conflict.

Alfred Korzybski justified the expression he coined "the map is not the territory" by saying that "the denial of identification (as in 'is not') has opposite neuro-linguistic effects on the brain from the assertion of identity (as in 'is')."

> The current implementation keeps an array of integer objects for all integers between -5 and 256, when you create an int in that range you actually just get back a reference to the existing object. So it should be possible to change the value of 1. I suspect the behaviour of Python in this case is undefined. :-)

does anyone have any idea how they chose that range? it's a 262-wide block starting at -5, which seems incredibly arbitrary.
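For what it's worth, -5 through 256 inclusive is exactly 262 values, and the edges can be probed directly (CPython-specific; `int(str)` forces a fresh parse, avoiding the compiler's constant folding):

```python
# Probe the boundaries of CPython's small-int cache (-5 through 256).
assert int("-5") is int("-5")        # cached: lowest entry in the table
assert int("-6") is not int("-6")    # just below the table: fresh objects
assert int("256") is int("256")      # cached: highest entry in the table
assert int("257") is not int("257")  # just above the table: fresh objects
```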

> We can use the Python built-in function id which returns a value you can think of as a memory address to investigate.

> [...]

> It looks like there is a table of tiny integers and each integer is takes up 32 bytes.

It is the memory address but it's a "CPython implementation detail: This [return value of the id() function] is the address of the object in memory."[1]

Though you cannot use this to determine the size of an object, or rather you "shouldn't", because that assumes a very specific implementation detail which isn't guaranteed.

If you'd like to get the size of an object, use sys.getsizeof().[2] Also keep in mind that containers in Python do not contain the objects themselves but references to them, so the returned size is the size of the container object itself only, non-recursively. Read "Is Python call-by-value or call-by-reference? Neither."[3] for some more details.

I wrote a blog post about this in the past. It's really fun going through the oddities of the language like this.

It caches small integers, but also literals used in the same interpreter context (I'm probably getting that last term wrong). You'll get different results if you run these from the shell as opposed to executing a script; try it out!
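One way to see the literal-sharing effect within a single compilation unit (a hedged CPython-specific sketch; the interactive shell compiles each statement separately, which is why results differ there):

```python
# CPython deduplicates equal constants within one code object, so even
# literals outside the small-int cache can end up as the same object.
def shared():
    a = 1000
    b = 1000
    return a is b  # both names load the same entry in co_consts

def not_shared():
    # Values computed at runtime are fresh allocations.
    return int("1000") is int("1000")

print(shared())      # True
print(not_shared())  # False
```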

Ruby does automatic promotion from Fixnum (native size) to Bignum (arbitrarily large) and uses one bit of the native size as a flag to identify this which is why 2^62 - 1 is the max instead of 2^63 - 1. Though I think this is only true of MRI and other implementations handle it without the flag bit.

Perhaps one difference from Python is that in MRI Ruby, Fixnum doesn't really even allocate an 'object'; the object_id is the value in disguise. In fact, all 'real' objects have even object_ids and all odd object_ids are integers.

Fixnums are typically stored inline in data structures (like lists, arrays and CLOS objects). Bignums will be stored as a pointer to an heap-allocated large number. Data has tags and thus in a 64bit Lisp the fixnums will be slightly smaller than 64bit. Bignums can be 'arbitrary' larger and there is automatic switching between fixnums and bignums for numeric operations.

I remember implementing this too on Nova 1200. When the address space is bigger than the memory, you can place those integers outside the memory. Those objects do not actually exist in other words. Saves you memory cycles too, because you can calculate the numeric value from the address.

> That is suprising! It turns out that all small integers with the same value point to the same memory. We can use the Python built-in function id which returns a value you can think of as a memory address to investigate.

Unfortunately this blog post seems to miss a great opportunity to show you how you should compare integers for equality -- using the equality operator `==` and not the identity comparison `is`.
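To make that point concrete, a minimal sketch: `==` compares values, while `is` compares object identity, and identity comparison for ints only "works" inside the small-int cache, by accident.

```python
# int("...") sidesteps the compiler's constant sharing, so these are
# two genuinely distinct objects with the same value.
a = int("500")
b = int("500")
print(a == b)  # True: equal values, the comparison you actually want
print(a is b)  # False: distinct objects outside the cached -5..256 range
```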

EDIT: odd, this post attracted a lot of downvotes. Please help me learn how this post could be improved.

Cool examples, but I'm not super concerned about the problems arising from the ability to 'use ctypes to directly edit memory'. It's actually pointers to memory blocks, not the memory contents themselves: https://docs.python.org/3/library/ctypes.html If you're advanced enough to need to handle pointers to memory blocks in your Python program, you are probably good enough to know not to create problems with the behavior of iterators on ranges.
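As a harmless illustration of the pointer-level access ctypes gives you: in CPython, id() happens to be the object's address, so casting that address back through ctypes.py_object recovers the same object. This is purely an implementation-detail sketch; nothing about it is portable to other Python implementations.

```python
import ctypes

obj = ["hello"]
# Round-trip a raw pointer: address -> py_object -> the original object.
same = ctypes.cast(id(obj), ctypes.py_object).value
print(same is obj)  # True in CPython
```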

The next morning, my girlfriend and I ventured out on foot to attend a yoga class. The yoga studio was closed, so we went to the Waffle House next door and got something to eat. They told us that their water supply had been compromised, but had made special arrangements so they could make coffee safely. We enjoyed a hearty breakfast.

We learned only later that some areas of town had gotten two feet of rain, that houses and businesses were underwater, that multiple dams had been breached, that we should boil all our tap water for a week, and that most people in town were without water entirely. My employer (the University of South Carolina) shut down for a week.

I am a huge fan of Waffle House. I love their hash browns. There are none on the West Coast. It is one of the few things I miss about the East Coast.

I am incredibly impressed. They tend to look like glass boxes, typically with at least two outer walls of large windows. One back wall is bricked in on the kitchen side, as is one end wall where the bathrooms are. The front end may have glass on three sides. This is where extra seating goes.

They typically have a galley kitchen with bar seating in the middle and booths to either side that are readily served by cooks and wait staff, plus that additional seating area towards the front (towards the main road, usually).

They have sort of a bad reputation as a dive restaurant, probably in part because the building looks so much like a glorified trailer and the prices are low, but the food is good and I have always loved the chain. My dad grew up on a farm, so I come from humble people who like down to earth places like this. Reading this article makes me feel really good about being a huge fan, in spite of the classist contempt some people have for the chain.

You would think that a place that (very often) looks like a glass trailer would not be the first thing open after a hurricane. So, I am astonished, but happily so.

That's a lovely study in contingency planning. This is something that always strikes me when doing research in the line of my work: how many companies are simply totally unprepared for even the most obvious things that can go wrong. It's logical: paying attention to the happy path is where the business is; it is where you grow. But if you only concentrate on the happy path, you're a small step away from a disaster.

Now, natural disasters such as this one are much harder to plan for and deal with than the issues that could affect your average start-up. When you're moving and selling atoms the chain from raw materials to revenues is a very thin one with many weak links just waiting for the right conditions so they can break.

In contrast with that, in IT most disasters are man made and easily protected against. And yet, few companies do.

The mention of the Waffle House Index reminds me of the Big Mac Index used to compare currencies. The idea is that a Big Mac, being something of a commodity made from common ingredients, should cost the same everywhere in the world. If it is significantly more expensive or cheaper in another currency, then that currency must be over- or under-valued.

While not in the food industry, I work for a national auto parts supplier (which of course stocks more than just auto parts), and large-scale natural disasters are something to behold: from store teams checking both company-owned and privately owned stores for needs, to staging the warehouse with goods the local community will need, which can be as little as a semi-trailer-based generator.

Still, as with all organizations, what also comes up is spinning off demand to other stores and warehouses: getting those outside the affected area ready to help those affected, and so on.

The key, though, is communication, and you must develop a well-defined plan for how to communicate: from whom, how often, the methods of delivery, and with email it can mean using specific templates to make it readily apparent that everyone is on board.

Anecdotally, with family and friends who volunteer for FEMA, there are just "constants" that they also use for knowing when things are getting better; some are restaurants and others are simply commodity foods and goods.

This is very interesting and mostly true but I think I speak for all coastal Texans when I say it's not really a good measure of how we're doing, especially those of us that have lived through several hurricanes.

The original site was heavily degraded and was host to mostly a single species of invasive grass, introduced by ranchers for pasture. This invasive grass presumably was preventing native shrubs and other woody plants from re-establishing. The fruit company was permitted to dump 1000 truckloads of orange peels over 3 hectares (30,000 square meters) in exchange for donating 1,600 hectares of primary forest.

What's interesting is that the authors hypothesize that the acidity of the orange peels actually altered the soil pH enough to kill off all of the invasive grasses and allowed the native plants to reestablish themselves. Unfortunately the authors don't test this hypothesis directly, and the intertwined effects of {acidity, asphyxiation of invasive grasses by several tons of material, massive additional organic nutrient input} can't be teased out due to the experimental design. Though with effects like this, I'm sure restoration ecologists would be happy to receive 1600 Ha of old-growth forest in exchange for dumping rights on 3 Ha of disturbed pasture land, even if they don't know the exact mechanism.

Tangentially, if you want to reduce the waste that goes to landfill at your home, compost.

One of the best ways of composting that I know is bokashi. It works by a fermentation process. The biggest advantage of it is that you can put any organic matter into it, except maybe bones. Yes, you can even put meat, dairy, rice, pasta etc. in addition to fruit and vegetable matter. Once the bucket is full, let it ferment for a week, drain the liquid once a day, and then bury it. The fermenting for a week means that it breaks down much quicker in the ground. This means that all that waste becomes usable and useful to the bacteria and plants in the soil much much quicker than if you were to compost it the traditional way. It's a difference of weeks vs. months.

One of the biggest concerns is smell, but honestly it isn't so bad. Because it works by a fermentation process you need to keep the environment anaerobic, i.e. lacking oxygen. That means keeping the bokashi bucket sealed most of the time, so you only need to smell it when you open the bucket to put your waste in once a day. Besides, the smell itself isn't as bad as you might expect. It's like a strong pickled smell, reminiscent of vinegar.

Also, the liquid that you drain can be used as a fertiliser. You just need to dilute it 1:100 and pour it at the base of the plants when watering.

At my buddy's old house in Saratoga, CA, the soil in the backyard was super dry. What we ended up doing is just composting all the greens out back: watermelon rinds, cantaloupe, onions, really just anything non-meat. To speed up the process we just went out with a shovel and chopped the pile up to make the compost pieces smaller. By the time he sold the place the soil was super healthy.

It cites an interesting court case by a rival company that occurred after the contract to dump was signed:

"But a year after the contract was signed during which time 12,000 metric tons of orange peels were unloaded onto the degraded land TicoFruit, a rival company, sued, arguing the company had defiled a national park. The rival company won the case in front of Costa Ricas Supreme Court, and the orange-peel-covered land was largely overlooked for the next 15 years."

Forests aren't always a panacea. Sure, rainforests mutually support others in similar tropical and subtropical bands with rainfall patterns, but in the melting tundra forests of Russia, there is an effort to deforest permafrost to keep it from melting, using large herds of animals like bison or reindeer to kill the trees, because plains are less insulated in winter than forests. Without restoration of tundra back to plains, it's likely larger swaths of land will melt and collapse into a moonscape.

Wonder if this would be good for places hit by major wildfires, such as the Lake and Butte fires in California in 2015 - in the places where the fire burned strongest it consumed all organic matter in the soil...

This mirrors my experience gardening in Georgia. We have about half an inch of topsoil there, and below that is pure red clay with close to zero organic matter in it. If you want to have a garden, you add compost. I added a couple thousand pounds of well-composted manure and double-tilled it to mix it in down to a couple of feet. The result was loamy soil that grew vegetables like nobody's business.

Put enough compost on a plot and it will turn into a jungle, just like my garden did after I stopped tending it. I was afraid to go in there to pick tomatoes.

There are these Clarity icons, a short way down on HN on the same day there's a 'Feather' icon set, there's Font Awesome, and there are the Glyphicons that come with Bootstrap.

This seems like a lot, and I'm wondering what the dynamics are that cause icon-set-designers to say "Hey, we just finished our own set - let's release this!" Is there good publicity? Are they hard to monetize so they may as well give it away and hope for publicity? Are they easy to make so lots of people make them and some fraction decide to give them away?

I'd love to get insights from someone who understands this better than I do :)

I'm looking for a new icon library. From this list[0], I'd chosen Font Awesome, but it's a whole 936 KB! I'm curious whether there are any serious efforts at tree-shaking in icon sets. A folder of SVG files sounds like it'll tree-shake better than a font file. In my short perusal, Clarity appeared to be for Angular 2+, and I didn't see any details on easily building with only the icons you need, but at least you can pick from smaller icon sub-sets.

They should define aliases for certain icons. For example, I've searched for "save" and found nothing, however I did find a floppy icon. Or maybe the floppy icon should just be renamed "save" since it doesn't really have any other use today.

Being 11 years old isn't an issue because the chess memory observation is interesting data. The age is significant because virtually every pop psychology & brain book in the last 10 years mentions this chess study as one of the anecdotes. (Similar to how all the pop psych books mention the Invisible Gorilla, Marshmallow Experiment, Stanford Prison Experiment, etc)

As someone who interviewed dozens of bootcampers I can give some insight.

1) Most of the bootcamps teach the basics of front-end, and a lot of them teach by doing without explaining a lot of basic concepts. They are basically money grabbers and throw these people out the door as fast as possible to collect their money.

2) A lot of the students are in it for the money, not for the love of the job. Sorry, but it's true. I don't mean to include everyone, but out of 30 students, only 2 kept working at my previous company; they weren't good developers, but they loved the job and kept learning and improving themselves. Nothing wrong with pursuing a good paycheck, but in the developer world, in order to stay relevant over the next 2-3 years you need to keep sharpening your skills.

3) The best developers I've worked with are the self-taught guys/girls who learned on their own before they even hit college. The most boring developers I work with are the ones who only learned to code while in college and never developed any interest in learning before that. But the most horrible devs I've worked with came from boot camps. Not their fault; it's just how bad these bootcamps are.

Attention code boot campers: if you literally have any other relevant experience for a software engineering job, my advice is to just leave the code camp off your resume and focus on that. And then, your experience from the code camp will come through positively in the actual technical interview process.

Hiring managers are under a deluge of underqualified code boot camp candidates, who are trying to effectively get past resume screens using all sorts of tricks. The blowback from that (I'm speculating here) is that code boot camp folks are probably often being screened out early at a lot of places, since it's just too difficult to assess them on paper given the extensive grooming their resumes and GitHub profiles get from their mentors at these bootcamps.

Instead, my advice would be to clearly label the work you've done at a code camp in your README files and include a link to your github. Explicitly call out on your resume projects you have done on your own, and talk about those in your cover letter. If you have literally any other projects or experience related to software engineering include those on your resume and emphasize them!

But highlighting your code camp and touting it as "highly selective" and "accepts only top 1% of applicants" and all that stuff may be doing more harm than good at this point. The well has been poisoned by enough under-qualified people applying who ultimately need to be screened out via an on-site interview, which is time-consuming and considered a failure of the hiring process, since they have been set up to pass the resume screen and the initial phone screens. So my advice is to just leave it off, consider the knowledge a secret asset, and don't risk inadvertently damaging your own application!

All it takes is for a company to be burned once or twice by misleading applications from coding boot camp candidates to start auto-screening out all resumes from them in general. It sucks, but I found myself looking at resumes that seemed genuinely good, yet as soon as the boot camp was listed, I no longer trusted what I was looking at. So be mindful: there may be folks out there who would normally have had you interview but didn't, just because of past negative experiences with others from your code camp, or other boot camps altogether!

"But the coding boot-camp field now faces a sobering moment, as two large schools have announced plans to shut down this year despite backing by major for-profit education companies, Kaplan and the Apollo Education Group, the parent of the University of Phoenix."

Not sure "despite" is the right phrasing here. I'm a DBC grad from about three years ago, and I know DBC had various issues, but I do suspect that there was pressure from Kaplan to expand rapidly, crank prices, and decrease the quality of the education (because that's how Kaplan made their money everywhere else they've been profitable). Kaplan bought them right before my cohort started, and there was serious concern even then that it was a bad omen. I think a lot of startups in general would do well with less pressure for rapid growth, and I suspect that the bootcamp market is much the same.

The rapid expansion was likely a huge part of why those coding bootcamps closed. Coding bootcamps generally aren't major educational institutions - they're usually more like recruiting agencies that happen to target people who are about 9 weeks of serious effort away from being entry-level developers. That is, they're limited by the number of high-quality applicants that they can find and vet, not by the ability to recruit teachers and refine their curriculum.

I think traditional education, i.e. university, has a lot of flaws. But I also think one reason college/university has been around for a "long" time is that the system has merit. That merit, in my mind, being grading rigor, deadlines, classmates, and office hours.

I haven't done a bootcamp, but I've done quite a few MOOC courses so my opinion is based on equating the two. This might not be a valid assumption.

To me, MOOCs and I'd imagine Bootcamps are good to get an intro to something new, but they can't replace rigorous study of a defined base of fundamentals...i.e. a course of study.

What seems like a real opportunity are programs like the OSU post-bacc in comp sci and the GA Tech comp sci masters. I've been toying with OSU for about 2 years now, and haven't committed to it yet because they don't have some of the courses online that I'd like to take (computer graphics). And I haven't done the masters program at Tech yet because I don't want to get into that without a more solid foundation. To me, more schools with a post-bacc in comp sci and expanded course offerings (online) would find themselves flooded with demand. Recently I signed up at UCLA Extension... It is almost the right thing, but still has a limited offering and isn't quite the right fit.

Most coding bootcamps are a complete waste of money. Ever since I launched https://edabit.com, I've noticed a ton of traffic coming from these bootcamps and I'm not sure what to think of it. On the one hand, I like the free promotion but on the other, I can't help but feel as though these people are getting ripped off (considering they can access the site for free).

I came from a mix of technologies (approximate knowledge, but a master of none) before settling into the data sciences (Python ecosystem).

Galvanize has been precisely the challenge I was looking for, and so much more. I barely got through the interview process, but picked up a Veteran Scholarship worth half my $16,000 tuition along the way.

Quit my job as a Salesforce case jockey and am currently trying to stay afloat with the curriculum.

It's intense. As it should be. I want it that way. Otherwise everyone would be doing it. A counterargument might be: "everyone and their mother is taking a 'Data Science' title for the pay." From 2013 to now, that may hold a lot of truth, but they are eventually found out, I hope.

As for my cohort: there is a Physics PhD, undergrads in Mathematics and Physics, an actuary/statistician from an insurance company, a mom, a Biology grad, and a veteran. An eclectic group, I feel.

At this point, halfway through, we just want to survive and not wash out.

Finally, once capstone projects get rolling in a couple of weeks, whoever is left of us should be feeling pretty good about their accomplishment and competent enough to take on entry- and junior-level Data Science roles.

Who gives a shit where or how you learned to code. What a stupid and indirect way to filter candidates.

Just show me what you've done. Send me repo links, production code, or even just the end result with some sort of proof that you actually built it. That's the best part about any craft - practicing or building something creates tangible results. If you don't already have a portfolio, get to work on that before you start applying.

I've always had a problem with the phrase "learn to code." It assumes just being able to encode a thought in a programming language is good enough. It isn't. Having a thought worth encoding is what separates "super junior" devs from ones who actually know what they're doing. Being able to formulate worthwhile, efficient solutions to problems takes more than 12 weeks to learn. There's no substitute for time and experience. One of the reasons traditional college works well is that it forces you to spend a long period (~4 years) immersed in the discipline: thinking and reasoning about computing problems. Do you come away from that with everything you'll need to make production software? No. That still necessitates experience. You will, however, be able to learn from that experience more efficiently because you've got the fundamentals.

I work at a large company that absorbs tons of MIS/CIS grads. The non-CS grads that excel are the ones that are constantly hungry to teach themselves new things but for the most part they suck compared to the CS people. I can only imagine how much worse these bootcamp folks must be.

Sort of off topic, but in the second picture the whiteboard says "Ajacks" (and not AJAX, for Asynchronous JavaScript and XML). Makes me wonder how fast and loose they were playing with instructors, unless this is just a staged shot, or maybe a joke I didn't get.

Not that you need to know what the acronym stands for (hell, no one I know uses the technique to get XML anymore anyway!), but it is weird to see the details wrong in such a detail-oriented discipline, and wouldn't people wonder where the name came from?

I went to two bootcamps: The Starter League (Web Dev, '12) and Mobile Makers (iOS Dev, '14). Neither bootcamp is still in existence today. I feel fortunate to have gone through them when I did. Post-bootcamp I did independent work for 5 years, Web/Mobile Dev, before joining a startup.

IMO bootcamps are great. People try to add up the cost and run the numbers, but the experience of growing and struggling through stuff with others, in an in-person environment, was enriching.

If there was a Kotlin Bootcamp in my area I'd be interested in learning more.

Can someone remind me why boot camps were even a thing? What employer wouldn't be more impressed by a work sample? One possible plan:

1) Define success before doing anything. Pick specific companies or a specialty you want to end up with first.

2a) Take the three months (or however long you can afford to invest) and work backwards. Figure out the most impressive and relevant project that can reasonably be accomplished in that time. Make sure it's part of a hot trend, because the time spent is probably the same regardless.

2b) In parallel, network and build the best quality contacts you can in your target area. Blog about any interesting observations made as the project progresses. Come right out and say in your posts that you are doing this in hopes of building skills, experience, and proof of your abilities, and say where you want to end up.

3) When it's done, make it public online and make sure it presents with visual appeal and cogent explanations. Ask for feedback from your contacts, because that's your excuse to show off: "sure was challenging, but I was able to go from zero to learning and building this in three months!" Ask for feedback because it's your chance to ask about openings and interviews. Ask for feedback because you really will need it.

How does somebody new make a good project choice, when knowing what would actually be impressive and help land a job requires years of experience? You can't without feedback or validation, so don't make the mistake of choosing solo. Just ask people who are in the target area to help you decide. Lots of people would be willing to give input, and it's more chances for networking and follow-up conversations.

I think it's just natural consolidation in an early-stage market. There's only going to be room for a few major brands in the space, and then some niche players in particular regions and specialities. A lot of also-ran bootcamps are just trying to cash in in the meantime.

I remain a big believer that bootcamps will continue to supply a lot of the talent in the tech world, particularly in app development. I know too many success stories to think otherwise.

The struggle to choose your domain: Web, Mobile, Embedded, Databases etc. The struggle to choose the programming language(s) / technologies to learn. The struggle to choose in which order and from where to learn them (for free). The struggle to choose your Code Editor / IDE.

That's a lot of time spent learning about technologies to make the right decision for you, and googling for answers.

But guess what you'll be doing as a Software Engineer :)

I didn't go to one, but I imagine those decisions are being made for you. But I know about the struggle, and I remember it with pleasure and satisfaction.

It's almost as if putting huge expansionary pressure on (mostly) locally-minded organizations causes them to fail. Who knew?

A couple of cherry-picked examples don't mean the coding bootcamp industry is going anywhere. IMO, there's no reason to panic until these bootcamps are being advertised alongside x-ray technician jobs during midday reruns of Judge Judy.

Now, the glut of junior devs entering the market - that's something to be a smidge concerned about.

At the annual Denver Startup Week I noticed that about a third to half of the startup industry is incestuous - it provides facilities for developers to work better. These are mainly code academies and coworking spaces. I feel that if the tech industry ever has one of its periodic downturns, it could rapidly implode along with all these developer services. The reality is that about half of the workers entered the tech industry after its last major downturn during 2000-2003 and live in the fairy-tale land where it could not happen to us. Welcome to economic reality, suckers.

> Two specific coding bootcamps that 1) had been acquired by for-profit education conglomerates and 2) underwent relatively rapid expansion into second and third tier tech markets went out of business.

> As evidence of a larger trend, this article cites a single quote from the CEO of "a private lender and an alternative accreditor for the fast-growing boot camp sector." This is unpersuasive.

This article quotes Ryan Craig of University Ventures vs Rick O'Donnell of Skills Fund, to nearly the same effect.

So: big name closures of two bootcamps with similar pain points and acquisition contexts, vs. seemingly healthy expansion and unchanged placement rates for schools like Flatiron (mentioned in the article), Hack Reactor, App Academy, General Assembly, and so on. Further consolidation in the field wouldn't surprise me, but are we really observing a trend worth reporting on?

It's easy to bash these code camps, saying they don't do a good job at teaching people the "real stuff" like traditional education does. People graduating with CS degrees are probably on average better than a boot camp grad, but they likely spent 8x the time and money. And most of them still need a lot of investment from their first employer.

Education is broken on both ends. Spending 4 years teaching yourself will get you way farther than a CS degree will. Code boot camps need to focus on teaching people how to self-learn, because that's really the only thing that separates average engineers from great ones.

Firstly, I work with a ton of people who came out of the Recurse Center (formerly Hacker School) and I've been consistently impressed by them. It seems to be a pretty self-paced thing, so maybe the people who do well there are the sorts of people who are excited about what they're learning, and I'm sure there's a layer of filtering at the hiring level. Just my personal experience.

Secondly, I think that while learning specific skills (app development, web development, etc) are really good things, they should totally be a layer on top of a core foundation of good computer science concepts. "Learning web development" ideally means "learning how to apply the concepts I know to the web environment", not "learning how to build a website The Right Way". I totally blame inflexibility in these bootcamps not from their inability to adapt their course material fast enough, but on their lack of good conceptual foundations.

The article talks about the closure of a couple of bootcamps and then holds up a few others as examples of ones that are getting it right.

However, it fails to give any solid evidence that any of these exemplar schools are any more stable/less likely to close than DBC and IY. Are they profitable or propping themselves up with VC money while trying to find product/market fit? If they're profitable, how profitable (and for how long)? What did they do to get there (layoffs, campus closures, pivots to a new focus)?

The quality of their grads aside, I wouldn't be surprised at all if some of the schools mentioned in the article also end up closing their doors in the next year.

Many of us like to compare software development with other engineering disciplines. I like to compare it to playing music. Sure, you can learn to play some simple songs in a couple of months, but it takes lots of effort to master an instrument. Like musicians and sports players, there is no easy way to become really good as a software developer if you don't start as early as possible and dedicate your life to it.

Coding bootcamps are just trying to sell a short path where it is not possible to have one.

The US Navy gets some IT work done by Information Systems Technicians. It's not clear from the job description, but they seem to be something between software developers and sysadmins. They have 24 weeks of training, which is comparable to a long civilian bootcamp. Anyone know if they're any good?

I realize everyone is generalizing but it's easy to lose sight of something really simple: Different jobs require different skills. Not everyone on the team needs to know how to make complicated architecture decisions or write the most efficient sorting algorithm. Personally, I'm happy when there is someone on my team who can do the things I find boring...even if they don't have a "passionate" interest in learning more.

Same thing happened during the dotcom boom/bust. Newly-minted MCSE (Microsoft Certified Systems Engineers) were coming out of the woodwork with almost no real experience, and immediately getting well-paying jobs ($80k-100k at the time). As soon as the bust hit, those certificates were worthless and those people moved away from the Bay Area in droves. They didn't have any real passion for the job, they just knew that they could make good money if they passed a few tests. We called them dot-com migrant workers, at the time. The traffic was actually pretty decent for 3-5 years on 101.

I assume the same thing will happen with bootcampers, because I feel like many of them that I've met lack the passion for CS that the better programmers have. I know a few bootcampers, a couple from HackBright and a couple at my work. One of the HackBright graduates never found a job and went back to her old job which was kind of depressing considering how much she spent. But the others were decent, not great but decent. But I would prefer hiring a good fresh grad with a couple of internships under her belt over any bootcamper I've worked with, mainly because of the depth of knowledge they would bring to the table. Let's be real, how much can someone learn in 12-16 weeks, compared to someone with 4 years+?

Another reality check for the boot camps: students are finally getting wise to some of the issues. A major camp in NYC decided not to offer another web dev course because enrollment was so low. As the raging issues in tech with respect to ageism, sexism, and racism get more play, prospective students think twice before leaping. I can only offer NYC observations, not an official study, but employers won't hire much over 35 here. And if you are a woman or a person of color to boot - forget it; in terms of opportunity it's like trying to be a famous actor.

The bootcamps are more than happy to take the money of people of color, women, and over-35 beginners without a word to newcomers who ask these questions before they enroll - these concerns are brushed aside. What is worse is that if you look at the employees at several of these boot camps, the faculty is under 35, and often 95-100 percent white and male. Often, if there are women employed there, they are not just young, but the youngest. Or perhaps they have a woman faculty member, but she is kept off their website. (Specific NYC example - I can't name names, but just look at the boot camps' websites in NYC and you will find the one: the one with no women listed as faculty. They do have one woman; they just keep her from public view on their website.) This kind of blatant discrimination looks very amateur in NYC, where companies who have already been sued try to be a bit more subtle about discriminatory practices. These bootcamps staff themselves as a mini replica of this industry as a whole - not with an eye toward positive change regarding inclusion. But they preach it all day long. Perhaps they think they are really helping without taking a good look at themselves. I'm trying to err on the charitable side.

And I'm not saying no one at all hires women or people of color or people over 40, but nearly all the boot camp founders tend to be middle-aged and then hire staff and faculty who are under 40, white, and mostly male. If asked, I'm sure the words "culture fit" would get uttered. I'm frustrated at watching too many exceptional programmers in these categories, who have lots of valuable experience, get passed over for younger, whiter, and more male folks with no experience. A former restaurant server with zero teaching experience/ability/empathy should not be hired as faculty at a bootcamp over a CS grad with teaching experience whom former students love. I was "taught" by such a person at one of the top bootcamps. He and many others there had no clue how to teach. Their idea of teaching is to verbally code up a project while a student listens. He was not an exception - he was the rule there. The lawsuits are already in the pipeline, and that will provide an official data set soon. Whoever escapes these will get to move the model forward and hopefully do something about these issues and drive some positive change - not just give it lip service.

These boot camps are just this decade's iteration of the "tech/programming schools" of the 2000s.

In the 2000s, tons of programming schools were created to exploit lax student loan programs. They would bring in countless people and charge them $10K or $20K or even more to teach "programming" that you could realistically learn on your own. But it was so easy for people to get student loans, and so easy for these schools to get people low-end programming jobs, that they made a killing off of it.

After the financial crisis and the crackdown on ineffective "programming/tech" schools (especially their student loan programs), these guys rebranded and remarketed themselves as "boot camps" which take a portion of your future wages.

When times are good, pretty much anyone who can type on a keyboard can get a job, but when a recession comes, these "boot camp" graduates are the first to be let go.

So my guess is that these boot camps expect a recession soon and are cashing out. Maybe they are the canary in the coal mine. Maybe the tech bubble is going to pop soon. Given that they charge 10 to 15% of their graduates' wages for the first few years, I doubt they would leave so much money on the table unless they felt a shift in the economy or in hiring coming in the near future.

Once recruiters figured out that placing these underqualified people into jobs was not a good thing, the bootcamps started to train people how to fake their resumes. Just like how people figured out that if they want a specific job, they only need to update their resume to include all the keywords in the job description.

When I was hiring and screening hundreds of people, applications would show 'projects' completed on them as if they were real work. Hired.com was full of this for a while, until they started blocking them there too. People would go so far as to register a domain name to make a class project appear as if it was a real company. You'd go look at their GitHub and realize they only made a few commits to the 'project'. How is that a demonstration of skill?

I'm sure a few people have come out of these bootcamps with some real knowledge and skills. But this is the edge case, not the norm. You can't shortcut your way to a well paying software engineering career by spending a ton of money.

The level-2 driving that Tesla is pushing seems like a worst case scenario to me. Requiring the driver to be awake and alert while not requiring them to actually do anything for long stretches of time is a recipe for disaster.

Neither the driver nor the car manufacturer will have clear responsibility when there is an accident. The driver will blame the system for failing and the manufacturer will blame the driver for not paying sufficient attention. It's lose-lose for everyone. The company, the drivers, the insurance companies, and other people on the road.

Tesla's system doesn't have enough sensors. Musk forced his engineers to try to do this almost entirely with vision processing, and that was a terrible decision. Vision processing isn't that good yet. Everybody else uses LIDAR.

I've been saying for years that the right approach was to take the technology from Advanced Scientific Concepts' flash LIDAR and get the cost down. I first saw that demonstrated in 2004 on an optical bench in Santa Monica. It became an expensive product, mostly sold to the DoD. It's expensive because the units require exotic InGaAs custom silicon and aren't made in quantity. SpaceX uses one of their LIDAR units to dock the Dragon spacecraft with the space station.

Last year, Continental, the big century-old German auto parts maker, bought the technology from Advanced Scientific Concepts and started getting the cost down.[1] Volume production in 2020. Interim LIDAR products are already shipping in volume. Continental is quietly making all the parts needed for self-driving. LIDAR. Radar. Computers. Actuators. Cameras. Software for sensor integration into an "environment model". They design and make all the parts needed, and provide some of the system integration.

Apple and Google were trying to avoid becoming mere low-margin Tier I auto parts suppliers. Continental, though, is quite successful as a Tier I auto parts supplier. Revenue of 40 billion in 2016. Earnings about 2.8 billion. Dividend of 850 million. They can make money on low-margin parts.

The only industry to have produced truly driverless public transportation systems is the rail industry. Not aeronautics. Rail systems happen to be my business, and what I read here makes me very worried.

I don't think the majority understands what safety means in mass transportation. It's not about running miles and miles without accidents and basically saying "see?" It's about demonstrating /by design/ that the /complete/ system over its /complete/ lifetime will not kill anyone. In terms of probability of failure, it translates into demonstrated hazard rates of less than 1E-9, /including the control systems/. This takes very special techniques, and if that could've been done using only vehicle sensors, it would have been adopted by us long ago. I am also sorry to report that doubling cameras and sensor fusion will not get you to an acceptable safety level. We've tried that too, rookies.

Is it "fair", to use Elon's argument? After all, isn't additional safety enough compared to the existing situation? Ah, but we have been there too! For driver assistance it is indeed better. Similar systems were deployed during the second half of the 20th century (e.g. KVB, ASFA, etc.). But the limit is clear: it only /improves/ the driver's failure rate. It does not substitute for the driver. If you substitute, you have to do much, much better. Nobody will ride a driverless vehicle given the explanation that it is, you know, "already an improvement compared to a typical driver". Is it fair? Maybe not, but that's the whole point of entrusting lives to a machine.

Shameless plug: I've been writing a WIP series about writing a tiny optimising compiler - https://github.com/bollu/tiny-optimising-compiler. It tries to model as much as possible, and the aim is to show off the power that modern compiler ideas bring: SSA and polyhedral compilation.

May I suggest considering Forth as a substrate? Or will that get me voted straight into neckbeard land?

I've had a lot of fun hacking Forths, writing my own languages didn't really click until I finally had a serious look. Forth skips on most of the complexity of getting from bytes to semantics; even Lisp is complex in comparison; but is still powerful enough to do interesting things; and the best foundation for DSL's I've come across.
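To make that concrete, here's a toy sketch of my own (in Python, not a real or standards-conforming Forth) showing how little machinery sits between raw tokens and execution: a data stack, a dictionary of words, and one outer loop. Even colon definitions fall out almost for free.

```python
# A minimal Forth-style interpreter sketch (illustrative only, not ANS Forth).
def make_forth():
    stack = []
    # Built-in words: each is a zero-argument action on the shared data stack.
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "drop": lambda: stack.pop(),
        "swap": lambda: stack.append(stack.pop(-2)),  # move second item to top
    }

    def run(source):
        tokens = source.split()
        i = 0
        while i < len(tokens):
            tok = tokens[i]
            if tok == ":":                      # colon definition: ": name body ... ;"
                name = tokens[i + 1]
                end = tokens.index(";", i)
                body = " ".join(tokens[i + 2:end])
                words[name] = lambda b=body: run(b)   # new word just re-runs its body
                i = end + 1
                continue
            if tok in words:
                words[tok]()
            else:
                stack.append(int(tok))          # anything else is a number literal
            i += 1
        return stack

    return run

run = make_forth()
run(": square dup * ;")
print(run("3 square 4 square +"))   # prints [25]
```

The whole "parser" is `str.split`; semantics live entirely in the word dictionary, which is what makes Forth such a cheap substrate for DSLs.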

I appreciate the effort to have a web page talking about writing your own compiler. Right now, I'm looking for an easy way to add def-use chains to a compiler. I know it involves breaking code into blocks and tracing through the code looking for uses of the same variables. This tutorial is good because it adds a symbol table at each block level, which helps in differentiating names that are re-used in a block and don't refer to other variables of the same name. Does anyone know of code that makes clear what is involved in def-use for a variable, without saying "this is an exercise for the reader"?
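Not a full answer, but here's a small self-contained sketch (in Python, over a hypothetical IR I'm making up: each instruction is `(id, var_defined_or_None, vars_used)`) of the classic recipe: run iterative reaching-definitions dataflow over the CFG to a fixed point, then make one more pass linking each use to the definitions that reach it.

```python
from collections import defaultdict

def def_use_chains(blocks, succ):
    """blocks: {name: [(instr_id, var_defined_or_None, [vars_used])]}
       succ:   {name: [successor block names]}
       Returns {def_instr_id: set of instr_ids that use that definition}."""
    # Build the predecessor map from the successor map.
    pred = defaultdict(list)
    for b, ss in succ.items():
        for s in ss:
            pred[s].append(b)

    # Reaching definitions to a fixed point. A fact is a (instr_id, var) pair:
    # "the definition of var at instr_id reaches this point".
    IN = {b: set() for b in blocks}
    OUT = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, instrs in blocks.items():
            new_in = set().union(*(OUT[p] for p in pred[b])) if pred[b] else set()
            reaching = set(new_in)
            for iid, d, _ in instrs:
                if d is not None:               # a def kills prior defs of the same var
                    reaching = {(i, v) for (i, v) in reaching if v != d}
                    reaching.add((iid, d))
            if new_in != IN[b] or reaching != OUT[b]:
                IN[b], OUT[b] = new_in, reaching
                changed = True

    # One more pass: at each use, every reaching def of that variable gets a link.
    chains = defaultdict(set)
    for b, instrs in blocks.items():
        reaching = set(IN[b])
        for iid, d, uses in instrs:
            for u in uses:
                for did, v in reaching:
                    if v == u:
                        chains[did].add(iid)
            if d is not None:
                reaching = {(i, v) for (i, v) in reaching if v != d}
                reaching.add((iid, d))
    return dict(chains)

# Tiny CFG: x defined in 'entry' (i1) and redefined in 'then' (i3), used in 'join' (i4).
blocks = {
    "entry": [(1, "x", []), (2, None, ["x"])],   # i2: e.g. a branch that reads x
    "then":  [(3, "x", [])],
    "join":  [(4, None, ["x"])],
}
succ = {"entry": ["then", "join"], "then": ["join"], "join": []}
result = def_use_chains(blocks, succ)
# Both defs of x reach the use at i4: result[1] == {2, 4}, result[3] == {4}
```

The per-block symbol table the tutorial mentions handles the "same name, different variable" problem; the alternative is to rename into SSA form first, after which every use has exactly one reaching def and the chains are trivial.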

On HN this might be attractive because people tend to complain about Windows only software, and all of Valve's samples use SDL and in theory run on Mac OS X and Linux. The OpenVR example is a little bit more verbose, but it does have some extra functionality, like drawing render models (controllers) provided by the API.

Can someone give me a non-graphical use case for learning Vulkan, or GPU-based programming in general? I've heard of hardware acceleration. Is it something like writing your routines against an API like Vulkan and offloading the computation to the GPU?

I understand the need of being explicit, but why not include some reasonable defaults? When you need that last drop of performance, you could opt out of those defaults, but specifying everything by hand...

OK, OK, I understand. Perhaps Vulkan is a compiler target. It was never intended for someone to write Vulkan code by hand. Just code in some higher-level API or language and get everything compiled into low-level Vulkan calls. I surely hope so, for the sake of my sanity. Right? Right?

My understanding is that the attempts to bring the mammoth back from extinction are partly driven by a desire to preserve tundra permafrost, since the mammoth removes the insulating snow layer. Seems like there might be easier ways, but it might work...

I wonder if an expert could chime in on why this land (and carbon) would not be taken up by larger, more developed plant life, such as larger shrubs or trees. Conditions seem ideal: fresh, bacteria-rich soil, high CO2 levels, and plenty of water.

Is there a site out there that details what the actual scenarios are and how to survive them (if at all possible)? I mean if there is a giant methane burp from the ocean floor, can I and my family survive by wearing gas masks for 7 days? I honestly have NO IDEA. Just curious if anyone has gone through this methodically and developed survival plans.

In his book "The Beginning of Infinity", David Deutsch gives a hypothetical example of a technologically advanced spaceship arriving at an empty spot in space, a cube the size of the solar system. It is an utterly inhospitable spot, and yet this civilization can create everything it needs to live there. There are billions of tons of hydrogen atoms in that "empty" space, hence fusion can be harvested, new elements created, etc. The point Deutsch is making is that problems are inevitable, and they can be solved by the use of knowledge. It's not the end of the world, people.

Unfortunately for the author of this amazing (and cringeworthy) exercise, skeptics often put in a lot of effort, but receive practically no reward compared to the con artist.

Even the people who agree with and appreciate the effort of the skeptics are few compared to the number of irrational people who gladly follow the cons.

This is a bit radical (and probably outside the interest of HN), but if we somehow could remove personal financial need from society, imagine where our technologies might be driven?... instead of where they typically get hijacked and driven.

Television was once lauded as the best thing that would ever happen to education, but we know how that turned out. Same with the internet and the web. And so on. Blockchain technologies are now the hottest and most ideal way to build a scam, because the audience is greedy and the technology is well beyond the comprehension of the average person.

A blockchain is, at the end of the day, nothing more than a distributed database where everyone can run a node.
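That framing can be made concrete with a minimal sketch (plain Python, ignoring networking, consensus, and mining entirely): each record stores the hash of its predecessor, so any node holding a copy can detect a rewritten history.

```python
import hashlib, json

def add_block(chain, data):
    """Append a record whose hash covers its data and its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"data": data, "prev": prev}, sort_keys=True)
    chain.append({"data": data, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Re-derive every hash; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"data": block["data"], "prev": prev},
                          sort_keys=True)
        if block["prev"] != prev:
            return False
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain = []
add_block(chain, "alice pays bob")
add_block(chain, "bob pays carol")
assert verify(chain)
chain[0]["data"] = "alice pays mallory"  # tamper with history
assert not verify(chain)
```

Everything a real blockchain adds on top of this (proof of work, peer-to-peer gossip, BFT consensus) exists only so that mutually distrusting nodes can agree on which chain is the database.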

This is a hell of an effort, and everyone using this 'feature' needs a very good reason and a use case that justifies the costs.

Every new ICO should answer the question of whether a distributed BFT database is required. If yes, is the benefit big enough? If not, then just forget it. Or invest, and sell the minute this scam hits an exchange.

About Storj and similar coins: why do I want a decentralized Dropbox? If I want to host my files in the cloud, what is the benefit of a distributed, decentralized BFT db?

I followed the storj reddit for a while, since I bought into the crowdsale with Bitcoin. (Since sold. There was some kind of pump the other week on bittrex.) After the crowdsale, they appeared to be really rather disorganised. They took the $30m, sent everybody receipts, and then... nothing. No newsletters, no mails, no nothing, aside from occasional promises on reddit from one of the founders that things would improve. But they never really did. Couldn't even be bothered to wipe their noses and put their pants back on for long enough to click 'send' on a form email ;) Top marks for actually having a product, though...

BTC and ETH have appreciated significantly since the crowdsale finished, so heaven knows how much that $30m is worth now, and/or what they're going to do with it...

What bugs me most about the whole ICO game is that it casts a long, dark shadow over the whole blockchain space. Storj and other projects in this realm are based on great ideas, and I'd love to see them succeed. However, that Storj was able to obtain $30M in funding is just awful. Sooner or later the cryptocoin market will crash, and when that day comes and these projects still have no inherent value, a lot of people will be pissed and leave the whole blockchain thing.

I invested in Storj, and I also put some resources behind farming. First, there are lots of issues, but that's to be expected with an early product. At least there is a product. Second, this is a very different project from Dropbox, a centralized cloud storage service originally built on AWS. Comparing the two isn't exactly fair, and the long-term value proposition of Storj is very different from Dropbox's. Third, they are building the platform, and as it improves, the clients will get better and simpler, and a community will form; give it more time. Fourth, Dropbox was founded in 2007; it took substantial time to get the front end polished and working well. This product is more about the AWS side of the Dropbox equation than the Dropbox side.

Finally, some results on my Storj farming. It's not super easy to use, and not super efficient, but it's working, and I expect it will get better. I have farmed about 15TB of data (storage) since the ICO. It's growing too, so there are real paying users.

I've been looking for more 'technical skeptic' posts about cash-grab cryptos, so I hope this author posts more articles like this.

I got tired of hearing about some new amazing, world-changing crypto coming out and then spending hours researching it, only to end up with a bad taste in my mouth, both about the community and the tech itself.

I understand the appeal of ICOs and cryptocurrencies and distributed storage that is automatically shared, decentralized, etc etc. But a part of me says "is this complexity necessary?". The appeal of a service like Dropbox is that it is simple. Most engineer friends I had at the time when services like Box and Dropbox started popping up, laughed at the idea... "It is just a fancy ftp". Now they all use these services in one way or another.

Simplicity is an important, often overlooked killer feature.

And this is exactly the polar opposite of that!

A friend of mine who has been doing ICOs behind the scenes (there is an entire methodology that makes them successful, including some questionable marketing techniques) has been pushing me to do an ICO. But one of the requirements is that the service you provide is directly tied to the currency itself. This file storage service is a perfect example of that: a cryptocurrency looking for a service that fits the paradigm, versus a service that just centers on being useful to customers. I am just not interested in doing that...

I can't deny that this has been useful and profitable for him. He has been doing successful multimillion-dollar ICOs where he charges $100k a pop. To me, it just seems like such a scam to take advantage of this bubble that I'd rather just build a healthy, successful business "the old way".

As far as I can see their idea is to incentivise people to "rent out" hard drive space to the Storj network, and get StorjTokens in return, although it's not quite clear from the website what you can do with these coins or what value they hold.

I think it's interesting. I guess protocols like BitTorrent don't work for personal files, as you need seeders to be available at all times... I'm half inclined to think Storj might suffer from the same issue if every node storing replicas of your blocks goes away - or will the financial "incentive" be enough for people to keep their nodes alive?
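The replica-availability worry lends itself to a quick back-of-the-envelope check (illustrative numbers only, not Storj's actual parameters): if each node is independently online with probability p and a file is stored on k nodes, the file is reachable with probability 1 - (1 - p)^k.

```python
def availability(p, k):
    """Probability that at least one of k independent replicas is
    online, given each node is online with probability p."""
    return 1 - (1 - p) ** k

# Even flaky home nodes that are online 60% of the time give decent
# odds once the network replicates widely enough:
for k in (1, 3, 5):
    print(k, round(availability(0.6, k), 4))
```

The catch is the independence assumption: if the financial incentive dries up, nodes leave in correlated waves, and the exponent stops helping.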

I'm still curious as to how this is profitable to whoever created it. There is a shit ton of code in this project - the same amount of code written for an employer or a commercial project would net the author quite a bit of money - so what's the benefit of creating this abomination (that doesn't even work) versus just working on something actually useful?

What's the likelihood that ISPs will levy higher rates on bandwidth for things like Storj and Filecoin? This 'next-generation' model of enabling people to profit off of cheap/unlimited internet packages seems like a juicy target.

There was a $30MM ICO, which ended May 25th. The price of ETH was $177 at the time. Not bad! Dropbox's first seed round was only $15,000.

I don't get it. I don't understand why people want to take a lot of money, put a ton of pressure on themselves, and then put out a shoddy, half-baked, half-finished product. You see this with almost all software now, but the cryptocurrency craze has made it endemic. People who have shown nothing are getting 30 million dollars in funding. Shouldn't you have to earn it first? Wouldn't that be healthier?

You guys don't need $30 million to do stuff like this. You're lying to yourself if you believe you do. This is $30k worth of effort. How about putting your head down, shutting your mouth, building something great, and then showing the world? Is that crazy? You can only assume these people are more about $$$ than what they are building.

While I don't feel the risk of brain drain stems from the reasons listed in the article, I do believe the risk is real. To be honest, even though many European countries don't pay as much as the US for the same job, I found much more comfort in the ideas pushed in those countries.

The culture I found in Sweden for instance is one that is much better for the employees than the fast-paced top-gun style of programming expected in the US. Sure my peak will probably be 60k, but that 60k will also come with a much better social quality of life. That's the problem with the US that would risk brain drain in my opinion. Money attracts people, but treating them well will keep them and the US is starting to fall away from treating its workers well.

Edit: just think about this. Most Americans talk about how they have so much work to do they don't get vacation. Well in Europe you usually get your 2 weeks and in many countries you get 4 contiguous weeks without question. Couple that with better healthcare options, and yeah the US is not looking so great.

He got a good offer from his home country. Isn't it a good thing that he gets to go home and contribute there? The fact that they can spare $250k for him to buy a house and $1M in grant money says a lot about China.

Having been in academia for a bit, it seemed there was an oversupply of PhDs in some fields. There are just not going to be enough university teaching positions, and not enough Googles or Teslas or other companies needing that many employees with such advanced degrees.

He mentions it's Trump's fault. Criticize Trump if you like, but I'm not sure attaching it to this particular case is productive. I think they meant this H-1B overhaul.

They've claimed that the person in the article is going back home because he received no teaching offers. That's standard practice for foreign individuals on a visa in the US. You have to be doing something to stay here. We're not going to let you live here just because you felt like you wanted to.

---

Other than the politicking going on in the article ("OMG TRUMP IS SO BAD AMIRITE?!", the Paris accord disagreement, etc.), what I think is up for debate is:

Should foreign contribution in the economy on a national level be considered fair and healthy for our economy?

My concern is that depressed wages from competing on unequal terms cause more long-term harm to the consumer and the businesses involved.

Because the article just offers a single example, and seems to center on the lack of available tenure track positions, I'm going to take a more holistic view that gets at the major issue: is current research funding at the appropriate level?

What would happen if we increased research funding by X percent? How did we settle on the current funding levels? I would be curious to see a reasonable source for this. A cursory google search mostly returned opinion pieces that we should increase funding for science. I agree, but hard(er) numbers would be better. It would be great to see a back-of-the-envelope ROI for X percent funding increase in T time. Obviously funding can be applied in many ways, and the ROI is difficult to measure, but someone must have studied it.

For the immediate future, the US remains the best place for research. But dominance can begin to change before the effects become obvious, like a large company that's still profitable long after it's become irrelevant.

> U.S. universities take about half of research grants as fixed overhead, sapping up funding before it reaches a scientist's hands. In China, overhead is closer to 10%, allowing more staff hiring and equipment purchases, Li said

the real issue, unanswered by the article, is why didn't linsen li get job offers here? because the premise that we want to keep all the smart people here to generate more economic value for "us" rather than "them" is hard to disfavor.

1. was he deficient in some way that makes him unsuitable for the job?

2. did the university not provide the needed skills for the teaching positions he was applying for?

3. is the supply of qualified applicants so high that many receive no offers?

4. is there just not enough funding to employ every qualified academic?

5. is the regulatory environment such that we force many qualified jobseekers to look elsewhere?

...and so on. these seem to be the more pertinent questions in this case, not the political "trump is xenophobic and his policy sucks" slant of the article (whether you agree with that sentiment or not), because the uncertainty around immigration policy doesn't seem to have had a direct bearing on this situation. li didn't get a job in the US, so he's going to china, where he did get a job. pretty straightforward.

with that said, i believe we should allow much more immigration, not less (contrary to trump's position), but the light and disjoint reasoning in this article was a head scratcher.

For better or worse, a number of countries are going to be churning out far more people with PhDs than can fill their academic or research positions. That will probably include China, given the trend of education there.

I'm leaving after 3 years working for SV start ups. It's time for me to do my own thing, and there's just no viable way for me to stay here during the "figuring it out phase". Spain, France and the Netherlands have attractive entrepreneur visas, or I could easily go home to Australia, to New Zealand or Canada. I will miss living near Stanford and the people around here, but there are other top universities to go camp out next to.

There was similar discussion on proberts AMA, there's simply no easy way to hang out here to work on cool side projects. The closest thing is the E2 visa, which would require all of my savings.

Besides job-offer salaries, we usually don't quantify other aspects such as social quality of life, family, housing conditions, etc. These are actually very hard, or in some cases impossible, to measure. The US has definitely been winning on the salary advantage, but it is lacking the rest compared to many places. This would be the main reason that talent eventually goes away. There's no war, living conditions have improved in many parts of the world -- why not those places?

$250K for a house. How exactly do you expect any country, let alone the US, to compete with that kind of offer?

Sure you can expand research grant money, as the article mentions, but what more than that? A guaranteed job? A blank check? The author should have spelled out some realistic reforms.

I'd hardly call 140+ returnees out of 3,000+ applicants a brain drain. It sounds like they don't offer this to foreigners either. Someone forgot to tell them how politically incorrect that is. Plenty of smart people from other countries would love to take their place.

And so what if they go to Canada, our friendly northern neighbor, ally, partner in this hemisphere, and subscriber to our intellectual property laws. I don't necessarily consider that a "drain", more like just living in "alternative version USA". Not like they can't easily return to the US if a better opportunity came up and or the political climate shifted more to their liking.

I have spent many years living and working in countries other than the USA-- and while I found all of them had one advantage or another over the USA, the cultural differences were enough to make me come back to the USA.

Alas, with the way things are going with regulations, particularly in tech, and the increased... polarization and radicalization of politics here, I'm starting to think that it's time to leave again.

And here's the thing-- I bought a car on a 5 year note and it's not even paid off yet.

Pretty soon, any kind of innovative work in the crypto space (e.g. trust-less atomic swaps between blockchains) will be effectively illegal, UNLESS you can raise $100M to hire enough lawyers to prove you never have custody of the coins.

Much cheaper to move to a friendly jurisdiction, raise the same $100M and put it into engineering salaries.

Are the people who applied for research positions and got the job staying? I guess so. So those who didn't get a job weren't the "brains" the US needed? And apparently China is paying him big compensation because these are the "brains" China desperately needs. It seems like everyone is happy.

risking? it's been happening for years already, both in the "drain" sense, those leaving the US, and with fewer "brains" coming to the US from other places. also, with the decline in US academics, fewer are being created in the US to begin with. altogether, three different factors making a "brain drain".

My father was a postdoc in the 90s, having gotten his master's and Ph.D. here in the States. My mother has a Ph.D. as well. My family is Taiwanese, and I knew about this issue growing up. It doesn't just affect our family; it also affects the other Taiwanese families I am aware of.

The difference was that, back then, if we were to go back to Taiwan, there wasn't a guarantee that a US-trained Ph.D. would get any sort of employment.

It is similar, yet different, with China: in the past 5 years, I have seen China, flush with cash (from, you know, Americans), aggressively investing. Infrastructure investment is the most obvious -- for example, ambitious rail plans that stretch not only across China, but also into Europe and Africa. There is aggressive investment in cultural influence as well (for example, trying to get Wushu recognized as an official Olympic sport).

And then there is advanced research. There are two teams in the world researching quantum tunneling communication, and one of them is a Chinese team led by a protégé of the PI of the only other team trying to pull it off. There is massive investment in AI/ML, in space, and in the military (both a carrier program and asymmetric military technologies such as drone carriers).

A postdoc in America doesn't really get much respect, not the way the Chinese and Taiwanese give it. You could struggle here in America, writing research grants, and your family doesn't know or doesn't even care. Or you could be offered a position as director of a lab with a lot of funding. Family members and neighbors might not know the science behind what you do, yet they admire and respect you.

I remember a documentary about the creation of the Three Gorges Dam and the displacement of some families along that river. It showed a girl in one family who was sent off to work on the cruise ships for tourists who wanted to see the Three Gorges before it got flooded. The family needed money, so they sent their daughter off -- a very Chinese thing to do. The girl didn't want to work, and the mother told her: sorry, this is for the family. You know what dream the girl was giving up? It wasn't being an entertainer, or an entrepreneur. She wanted to become a scientist. She saw scientists as the heroes.

That's something I never hear in America.

I think a lot of people in the West forget that (despite the Cultural Revolution), Ming-dynasty China at its height had more reach, technological prowess, and industry than any other society ... and then it deliberately turned inward: completed the Great Wall of China, recalled the exploration fleets, and shut down its massive network of steel foundries, just before the Europeans were starting to explore and colonize. China has had many cycles of imperial expansion, consolidation, and fragmentation.

That's funny. Everyone I know in academia, from CS to Philosophy to Biology, is complaining that there is a glut of "academics". They have PhDs, but they can't get positions in labs or assistant professorships to get on track for tenure, because there are so many PhDs in academia.

Maybe a brain drain is something the US needs. An oversupply of "brains" isn't a good thing. We need a balance.

My understanding is that black hole mergers are not expected to have any optical-wavelength emissions, whereas neutron star mergers should have emissions across the electromagnetic spectrum. Is that distinction part of the excitement here?

While I would have trouble imagining a quantum of space-time curvature (the graviton), it is not hard to see that changes in the curvature could propagate in the form of waves. So, while an experimental discovery of these waves is an important event in the history of science, I am left curious as to whether it adds anything to our present understanding of Nature...

"We are working hard to assure that the candidates are valid gravitational-wave events, and it will require time to establish the level of confidence needed to bring any results to the scientific community and the greater public. We will let you know as soon we have information ready to share."

Normally, the scientific community is pretty careful about not revealing results before they're fully baked (press notwithstanding). Seeing how that control has broken down for this incident, is it correct to infer that astronomers have pretty much lost their minds over the possibility of capturing a neutron star merger?