Despite the controversial topic, I think it is interesting to see what one can conclude about a country from freely available information (though the nmap'ing might have been illegal; I'm not sure about the current laws around port scanning).

What are some good books/resources on things like "allocated" and "assigned" IP addresses, i.e. Internet governance and IP addressing in general? Where is he getting data like "inetnum: 175.45.176.0 - 175.45.179.255 ..."?

Also are there tools that take a list of services on ports and map it to likely hardware/OS?

I have been programming for a long time but somehow I missed out on this kind of networking knowledge. Are most people who know this stuff network engineers?

The general population doesn't know or give a shit about the torture report. The educated don't really give a shit beyond shaking their head while reading the report in the Times, or posting a link on their FB saying that it is 'shameful.' Sad, but true.

We've known about these practices for years. The Abu Ghraib scandal was 11 fucking years ago. We've known about waterboarding and Guantanamo for years as well.

All of which is to say, I think if you believe that the U.S. government needs to create a false flag operation to bury the report, you are seriously out of touch with the political reality. Public apathy will bury it for them.

The most enjoyable part of this project for me was implementing the NFL's incredibly elaborate tie-breaking procedures in code: taking a plain-English description of the rules (one that requires some interpretation) and porting it to JavaScript.

We should be open-sourcing the simulator implementation (the model, but not the UI) soon so you can see how it works.

There was also a fun moment when I realized I could make the simulator 10x faster by computing only the seed of the selected team, rather than the full order of teams in the playoffs. This avoided a lot of unnecessary sorting and tie-breaking!
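The idea generalizes nicely. Here is a minimal sketch in Python (the `ranks_ahead` comparator is a made-up stand-in for the real tie-breaking logic, which in practice resolves multi-team ties jointly):

```python
def seed_of(team, teams, ranks_ahead):
    """Seed = 1 + number of teams ranked ahead of `team`.
    Computing one team's seed by counting is O(n); it avoids
    fully sorting the whole league with expensive tie-breakers."""
    return 1 + sum(ranks_ahead(other, team) for other in teams if other is not team)

# Toy usage with win fraction as the only ranking criterion:
teams = [("NE", 0.75), ("DEN", 0.75), ("SEA", 0.80), ("GB", 0.60)]
ranks_ahead = lambda a, b: a[1] > b[1]
print(seed_of(teams[0], teams, ranks_ahead))  # NE is seeded 2, behind only SEA
```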

I had the same feeling when watching this film. Instead of giving the portrayal the depth Turing deserved, Cumberbatch fell back on his usual typecast genius character with minor tweaks. Very disappointing.

I agree with the author's criticism of the movie, but I still personally enjoyed The Imitation Game.

It's good entertainment even if it is quite exaggerated and not 100% historically accurate. If it exposes more people to some of the history of computing and one of its great early engineers then I think that is positive.

We've been developing an app that also streams video content from Mac to iPad, and this recent Apple reversal on USB has been very frustrating for us.

About a year ago we submitted an app to the App Store that uses the same USB tech (PeerTalk); we wanted to see if Apple would allow it, since we didn't want to build a business around something that could change on a whim. Had it been allowed, that would have changed the entire direction of our development. An Apple rep called us and informed us that USB access is not allowed.

With this in mind we went ahead with using WiFi, which has been HARD and time consuming. Now Apple all of a sudden allows USB access, what?!

How does Apple expect people to build serious apps when the ground is shifting beneath our feet?

It's always possible that they have custom contracts with the creators of these open source projects; CocoaSplit's author could have offered them a custom license without x264, so that part can be dual-licensed.

Dean (the author of the article) mentioned that several other apps have attempted to use PeerTalk and the other OSS code this app uses but had their app rejected by Apple (e.g. see mronge's comment). So perhaps the lack of attribution stems from the assumption that use of these OSS packages was part of the reason for rejection. If that is not the excuse then I can't imagine any reason for it. Attribution in this form is a simple footnote somewhere, and giving credit to others never hurts your own brand. In fact I'd argue it contributes to it.

I paid for it and have had very little luck in getting it to work properly. I've emailed their support and heard nothing back. I would personally advise people against giving them money, especially now that I know of their failure to credit open source projects.

What actual benefit is derived by an open source project when they are credited somewhere deep in an about page? Is tangible value provided there?

I feel that if I had produced an open source library, I'd be less interested in having my name anywhere near a project that I didn't actually produce. But I have little practical experience with such things.

- CocoaSplit's licensing isn't anywhere BUT in the credits.rtf, which is buried. It's not in any of the source headers, and not in the README.md. If you aren't using the app part of CocoaSplit, you would never know it was there unless you grep'd. And who amongst us has grepped for a license?

- PeerTalk and GPUImage are permissive licenses that only require attribution

The key assumption here was that the soldiers considered themselves "bitter foes". I'm not sure this was the case. In some cases it took "courts martial" -- i.e. throwing soldiers against the wall and shooting them -- to resume the fighting.

I think that anyone who lived for more than a couple of weeks on the Western front came to the realization that the people engaged in the fight had more in common than not. The officers and rear echelon folks shot thousands of their fellow countrymen who cracked under the pressure of constant bombardment and death.

So as a soldier or junior officer, you faced certain death both in front of you and behind you. Survival meant huddling for warmth in a fetid hole. Those poor bastards were cogs in a murder machine -- the only "golden ticket" was losing a limb.

The informal truces of the First World War amaze me whenever I read about them. I have similar feelings when I read about the widespread dissent by US troops during the Vietnam War, ranging from Search and Avoid missions, to the "flattop revolts" and suspected sabotage of Navy ships.

The bitterest foes for the troops don't seem to be the enemy troops, but the politicians and upper staff, who offer the certainty of punishment and the slander of dishonor, while the enemy troops offer only the chance of death or injury.

If true, I begin to understand why John Kiriakou, the only CIA agent jailed because the US decided to torture people, is a whistleblower and not one of the government torturers, including those who authorized the program. That sort of power structure requires that the authorities be able to treat their own people worse than the enemy would.

This statistical test for causation (X->Y) is based on the idea that X and Y each contain noise - noise present in X flows causally to Y but noise present in Y won't flow back to X.

But, even if true, it isn't clear that this makes for a good test. For example, it's plausible that Y could have a damping effect and remove noise, which would reverse the results of the test.
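The asymmetry the test relies on can be sketched in a few lines of NumPy. This is illustrative only: the polynomial fit and the crude bin-based dependence measure are stand-ins for the nonlinear regression and formal independence test the actual papers use.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 4000)
y = x ** 3 + 0.3 * rng.normal(size=4000)   # ground truth: X causes Y

def residual_spread_ratio(cause, effect):
    """Fit effect = poly(cause) + noise, then compare the residual spread
    across quantile bins of `cause`.  If the residuals are truly independent
    of the cause, every bin looks alike and the ratio is near 1."""
    res = effect - np.polyval(np.polyfit(cause, effect, 5), cause)
    bins = np.array_split(res[np.argsort(cause)], 10)
    stds = [b.std() for b in bins]
    return max(stds) / min(stds)

forward = residual_spread_ratio(x, y)   # noise looks independent of x
backward = residual_spread_ratio(y, x)  # residuals still carry structure from y
print(forward < backward)               # True: the asymmetry points X -> Y
```

Note that the grandparent's worry applies directly here: anything that damps or reshapes the noise on the Y side can shrink or flip this asymmetry.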

"They say the additive noise model is up to 80 per cent accurate in correctly determining cause-and-effect." This has been exaggerated by Medium from "accuracies between 65% and 80%" in the original article.

But a coin-flip model should be 50% accurate. 65% accuracy is unconvincing. The journal article's conclusion admits that their results are not statistically significant in any sense. As such, the results do not even meet the weakest possible scientific standard. They couldn't reproduce earlier published results in this field (typical of publication bias).

Their final paragraph concludes that there is surely a method of doing this, but they just haven't found that method here.

In econometrics this approach is called "identification through functional form" because it relies on assumptions about the exact distribution of some of the variables.

The main problem is that it requires making assumptions that are very hard or impossible to test. It's an interesting idea nonetheless, but I doubt this method can replace randomized trials or instrumental variables except in a tiny fraction of cases.

This isn't as generally useful as the title suggests... due to these assumptions:

"that X and Y are dependent (i.e., P(X,Y) ≠ P(X)P(Y)), there is no confounding (common cause of X and Y), no selection bias (common effect of X and Y that is implicitly conditioned on), and no feedback between X and Y (a two-way causal relationship between X and Y)"

>Obviously temperature is one of the causes of the total amount of snow rather than the other way round.

Can someone explain how this is 'obvious'?

How can this be a claimed scientific way to tell cause and effect and then drop a sentence like that in the middle of the explanation?

Even if you accept that it's true that temperature determines snowfall, it seems there is likely some feedback loop in there. The fallen snow doesn't just disappear, wouldn't it affect later measured temperatures? Remove a bunch of (cold) snow from an area and the average temperature of the area should increase faster than if you had left the snow, no?

"The key assumption is that the pattern of noise in the cause will be different to the pattern of noise in the effect. That's because any noise in X can have an influence on Y but not vice versa." [...] "That's a fascinating outcome. It means that statisticians have good reason to question the received wisdom that it is impossible to determine cause and effect from observational data alone." https://medium.com/the-physics-arxiv-blog/cause-and-effect-t...

Reminds me of http://www.pnas.org/content/104/16/6533.full - interesting, but probably only applicable to very simple systems. If you have various complex interconnections between components, simple A -> B reasoning is not helpful.

It's like all statistical tests -- it works really well (provably well) when the assumptions it requires hold. However, it's usually impossible to know if those assumptions hold without holding the desired answer in the first place. That's why nonparametric tests are so popular (not saying they have much to do with the article at hand, but people are definitely willing to get less definitive results in exchange for making fewer assumptions).

We changed the URL from http://arxiv.org/pdf/1412.3773v1.pdf because, with some exceptions (such as computing), HN tends to prefer the highest-quality general-interest article on a topic with the paper linked in comments.

This comes up often enough that it is a good case for linking related URLs together, which is something we intend to work on in the new year.

Ok, so I'm thinking the need for this could become huge. How does one get into this industry as a programmer? What would I need to learn? What kind of diplomas are useful? What should I start practising on?

Interesting that, energy-wise, it is feasible to transport a whole lot of items that are not being used to the picker and back. But I am pretty sure that this was taken into account when designing the size of the storage pods. Pretty impressive!

This one shows the picking station, where the humans take things out of bins and put them in other bins. A computer-controlled laser pointer points to the item to be taken out, and a light shows where the item goes, and a bar code scanner checks on the human. The job takes about two minutes to learn, and full productivity for new humans is achieved in about half an hour. There is no possibility of promotion. Machines should think. People should work.

Kiva is a huge success. Before Amazon bought them, they had about 20% of online order fulfillment in the US. It's higher now, with Amazon using them. Kiva is so successful because it's so simple to install. All it needs is a big flat floor with some bar code stickers, a supply of cheap shelving units, and the robots. All the robots are small and interchangeable, so they don't have to be repaired on-site and there's no need for expensive on-site technicians and repair shops. So converting to Kiva robots is fast, cheap, and easy.

Automated warehousing isn't new, but it used to be a lot more complicated and far more custom. The older systems involved conveyors, machines that moved on tracks, extensive site-specific and product-specific engineering, and good onsite maintenance. Here's a state of the art version of a classic automated storage and retrieval system in a frozen foods warehouse. (The frozen foods industry has been heavily into warehouse automation for decades, because they work in a sub-freezing environment.) This is impressive, but look at the sheer complexity and number of moving parts involved. All those belts, motors, lifts, and sensors, and all dedicated - if any of that stuff breaks, it has to be fixed, not just bypassed. With Kiva, any dead robots can be pushed out of the way and dealt with later, off-site.

Kiva was started by one of the executives of Webvan. (Remember Webvan - first dot-com boom?) Webvan offered same-day delivery a decade ago. It was popular. It just cost too much to provide that service. If they could only get rid of all those warehouse employees and complex warehouse machinery... Well, they did. Most of them.

But humans are still needed to take things out of one bin and put them into another. For that, there's the Amazon Bin-Picking Challenge:

This is a great idea, I love it. Perhaps it would be a good idea to offer exports in the form of .sql files that can be loaded into any database, a la 'curl api.stripe.com/export... | psql'. I'm sure the export would be bigger than a binary sqlite file but it would remove the dependency on sqlite.
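For what it's worth, SQLite can already emit exactly that kind of portable SQL dump; Python's standard library exposes it as `Connection.iterdump`. A sketch (the table and values here are made up, and a real export would need dialect tweaks before `psql` accepts it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE charges (id TEXT, amount INTEGER)")
con.execute("INSERT INTO charges VALUES ('ch_1', 500)")

# Plain SQL text instead of a binary .sqlite file -- loadable by any
# SQLite build, or by other databases after minor dialect adjustments:
dump = "\n".join(con.iterdump())
print("INSERT INTO" in dump)  # True
```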

From the banking-industrial-advertising complex's view (roughly congruent with the mores of the Silent Generation), this IS cheapness. My parents were Silent Generation and were extremely cheap by those standards. But at that time, the stock of both existing cars and housing was well under demand.

It would appear to me, as a 25 year old college grad, that I and my peers are spending just about every dollar that we are able to make just to get by. That doesn't give us much opportunity to make purchases that require substantial down-payments in addition to financing, like vehicles and property.

Prices have inflated so dramatically since our parents' generation, on virtually everything, that saving up enough money to put 10-20% down on a major purchase, like a home, requires 5-10 years of savings. Then there is the job market uncertainty, where one cannot be assured that it will even be possible to remain employed in the same area more than a couple years down the road. Under such unstable conditions, tying oneself down into a home that it may not be possible to sell when circumstances dictate a change is a losing proposition.

None of this has to be attributed to some inherent "cheapness." I myself am choosing to live in NYC and don't plan to buy either a car or a house anytime soon, but that doesn't make me "cheap" (my Christmas spending would definitely beg to differ).

The impulse is to only speak up when you contradict the story, so here's my evidence: I'm 21, I've been a salaried software engineer since I was 17, I have never owned a car and barely know how to drive (even though I spent two years in Los Angeles). I don't spend all the money I make, but I do spend quite a lot on things I _do_ value. I'm not sure if I am a trend, but I also don't see how that would change any of my decisions, so I guess I don't care?

I wonder how this trend relates to where people live. It seems like not owning a car is only possible if you live in an urban environment. We make it work thanks to good public transportation and ZipCar.

Perhaps people still own cars, but only one to a couple instead of two?

I'd really like to see a "pull out all the stops" benchmark using highly-optimised Asm for the two architectures, as then it's just a matter of how much you can squeeze out of the CPU itself and not something limited by the thick layers of language abstractions on top of that. That would be a nice theoretical maximum to compare against.

Edit: I tested the C++ version on my 5-year-old i7, with an even older compiler (just had to modify the code to not use C++11 features), and with the max optimisation level, it produces a result of 1465ms - which is pretty damn amazing, considering that this is a 16-year-old compiler generating 32-bit code and the most recent CPU it had knowledge of was the Pentium Pro (P6)! I'm convinced that an Asm version could be <1s though, so there's still plenty of room for improvement.

So if it's a "Galaxy S3 with 2GB of ram and a quad-core 1.3ghz processor" then this should be the version with the Exynos SoC. That means the cores inside are A9s. The Intel cores are from the Westmere generation.

I wonder how much of the difference we're seeing between various languages is the quality of the code their compilers generate for various backends and how much is due to the different languages benefiting more from architectural differences between the two chips.

I imagine that languages that generate code with more indirection are going to exercise the prefetcher and branch predictor of the core they're running on much more than languages that generate code with simpler control flow. Both the A9 and the Nehalem cores are out of order, but the Nehalem has a much, much more sophisticated set of facilities for that. I predict that if you were to re-run the benchmarks on an iPhone 5S you'd see much less of a difference between the various ARM times. And if you were to run it on a cheap Android phone with A7 or A53 cores you'd see a much larger difference.

The new backend is supposed to be considerably faster on floating point code. This code looks integer only, and I don't have relative performance of old/new backend for integer code.

As a wider question: Who cares much about ARMv7? ARMv8 is a completely different beast, requiring a different backend, with much better raw performance (on the same terms as x86-64). That's where languages should be concentrating their current efforts.

The Racket and Lisp comments are a bit odd. To the best of my knowledge Typed Racket does support gradual typing, and comparing its compatibility to Scheme is a category error: Racket is not Scheme anymore; that's why it's called Racket now and not PLT Scheme.

This is a comparison of specific implementations of the two processor architectures. The benchmarking is still an interesting work, but it isn't a straight comparison of language performance across architectures. It might be a more enlightening result to know the number of opcodes each run executed.

I'm not really sure what this data means because amd64 and ARMv7 are ISAs. For instance, you could make a very deep and superscalar ARMv7 chip that blows a typical amd64 out of the water if you sacrifice size and power. Is the intent simply to show that some language backends are not optimized? Otherwise, without something like "These two chips and clock-for-clock or watt-for-watt it looks like this" it seems meaningless.

The LuaJIT results don't surprise me. I've always been impressed with LuaJIT. The OpenJDK results also don't surprise me. If you work in a Java shop, you learn very quickly to throw out OpenJDK in favor of Sun/Oracle Java. OpenJDK is indeed a "steaming pile of crap".

However, I would have liked to see Julia and Javascript benchmarks in those results. I've heard great things about Julia, and knowing just how incredibly far we've brought the Javascript VMs over the past decade, it wouldn't surprise me to see Javascript fairly high on the list.

Reading this pre-PEP, I'm not sure I understand what I will gain from this as a Python programmer. Can anyone explain what we can expect both short-term (first release, Python 3.5) and long-term (later version, accompanied by tooling and more side work)?

That post cheered me up for the day. Things finally seem to be moving in a (IMHO) great direction for this language. I was thinking about rewriting my latest large project from Python to Go, but now I think I'll wait a bit and see where this leads.

Ten years ago, this could have accelerated the development of fast CPython replacements. With the incompatible changes in Python 3, all the alternatives to CPython took years to catch up, or, like Microsoft's IronPython, were abandoned. Now that PyPy is getting close to the point where it might replace CPython, a major change to the language comes out of nowhere.

This sounds very much like a type-inference-with-hinting version of the C type system, with a minor extension. It also seems to have many parallels to the Objective-C type system with id, but again in a type-inference variant.

The three rules apply quite directly to standard OO-style C:

1. If t1 is a subclass of t2, t1 is also consistent with t2. (But not the other way around.)

Using the standard OO C method of subclassing structs, ie.

struct t2 { ... };
struct t1 { struct t2 t2; ... };

This is obviously true and the normal subclass relationship, though the C syntax to use this is a bit awkward:

t2_method(&t1->t2, arg1, arg2);

2. Any is consistent with every type. (But Any is not a subclass of every type.)

In C the Any type is void * (C has no usable values of plain void type, so the analogue is a pointer). Using the class definitions above this is entirely valid and produces no warnings:

void *v;
struct t1 *p = v;

So if you have a void * you can use it wherever a stricter pointer type is required. If what it actually points to is not consistent with that type, you'll get a runtime error (usually a segfault in the case of C).

3. Every type is a subclass of Any. (Which also makes every type consistent with Any, via rule 1.)

This just says that you can do the following without getting any warnings:

struct t1 t1;
void *v = &t1;

And this works quite well in C.

The extension to the C type system, beyond the type inference, is to make these rules recursive, especially in function types. For example, this produces a warning under GCC, though it will compile and run fine:

void f(int *(*func)(int b, int c)) { }

void *g(void *b, void *c) { return b; }

f(g);

That might just be a limitation of GCC's type checking though.

I think this is quite a good direction to take. C's type system has proven sufficiently powerful over the decades to build large systems, and at the same time it is trivially bypassed when you paint yourself into a typed corner or want flexibility that strict static type checking makes cumbersome.

I may be mistaken, but wasn't GVR fairly opposed to anything beyond simple type annotations? Don't get me wrong, I love that he might be turning around on this, but has anyone followed the transition and can provide some context? What led to this turnaround?

Is there any documented information on how the decision to use consistency instead of subclassing was made? Naively it seems very similar to, say, C# or Java with Any instead of Object at the root -- except that Any is consistent with all types.

That seems odd and I'm not sure I understand the benefits of it. I get that it only applies to things typed as Any (and presumably, as with Object, typing as Any is quite rare to want to do, especially with support for union types), but is there an example where you'd want this and the C#/Java subclassing would be limiting?
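One place the difference shows up is values whose type is statically unknown. A sketch using the proposal's typing module (function names here are illustrative): with Any, a checker lets the value flow into a stricter context without a cast, whereas a Java-style Object root would demand an explicit downcast first.

```python
from typing import Any

def double(x: int) -> int:
    return x * 2

def setting(key: str) -> Any:
    # e.g. freshly parsed JSON config: statically unknown shape
    return {"retries": 3}[key]

# Any is consistent with int, so a checker accepts this with no cast.
# If the root type were object (like Object in C#/Java), every such call
# site would need an explicit downcast before calling double().
print(double(setting("retries")))  # 6
```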

This is exciting. Ever since reading Jeremy Siek's post on Gradual Typing [1], I've been missing this in my Python and Ruby. This could be the killer feature that moves larger corporations towards Python 3!

Two things on the pragmatic side seem hairy though: type declarations in types, and `Undefined`.

That's a good read. Some of those problems are known from quantitative finance, where trying to extract statistical signals from data has been going on for decades. After much effort, all the easy wins (easy to find correlations) have been found and are no longer easy wins, because too many players have found them.

Some of the other problems listed are new, coming from taking what used to be research techniques and putting them into production programs. Those are more like ordinary big system problems, such as configuration management. The article points out, though, that your huge training set is now part of your configuration.

Then there's the problem of systems assigning excessive significance to features which happen to work but are really irrelevant. Those image recognition demos on HN last week illustrated this problem. At least there, though, there's a way to visualize what's happening. For many ML systems, humans have no clue what the system is actually recognizing. If your ML algorithm has locked onto the wrong features, it can become drastically wrong due to a minor change in the data. I saw an example of this in a web spam filter that was doing fairly well, working on 4-character sequences taken from web pages. It was actually recognizing specific code text used by one ad service. The page content seen by humans was totally irrelevant.

This paper's conclusion (I only read the abstract) jibes with my experience. Whenever I've tried to make something "intelligent" it has always ended in headaches. Nowadays when building code, I try to stay away from having the computer make decisions. I've found it's much easier to build things in a way that puts a real human in charge of all the decisions.

Side note: I love the analogy of a credit card to describe technical debt; it's one I've used with clients before and they really respond to it.

People (in the UK at least) understand that the interest on a credit card can kill you, while a regular loan assumes that you're paying off the principal month to month. Posing the question "are you paying off that technical debt month to month, or is it just sitting there?" really gets people thinking.

To me, this is a big part of what makes machine learning exciting: it's so challenging to implement it well. The result of it is that machine learning touches a lot of computer science, from high-level languages and formal verification to low-level languages and systems concerns (GPU programming, operating systems).

This difficulty is also a reason why machine learning programmers who are, at least, validated tend to get a lot of trust from the business that CommodityScrumDrones don't get (and that's why most good programmers want to redefine themselves as "data scientists"; it's the promise of autonomy and interesting work). No one tells a machine learning engineer to "go in to the backlog and complete 7 Scrum tickets by the end of the sprint". Of course, the downside of all this is that true machine learning positions (which are R&D heavy) are rare, and there are a lot more so-called "data scientists" who spend most of their time benchmarking off-the-shelf products without the freedom to get insight into how they work.

I actually think that the latter approach is more fragile, even if it seems to be the low-risk option (and that's why mediocre tech managers like it). When your development process is glue-heavy, the bulk of your people will never have or take the time to understand what's going on, and even though operational interruptions in the software will be rarer, getting the wrong answer (because of misinterpretation of the systems) will be more common. Of course, sometimes using the off-the-shelf solution is the absolute right answer, especially for non-core work (e.g. full-text search for an app that doesn't need to innovate in search, but just needs the search function to work) but if your environment only allows programmers to play the glue game, you're going to have a gradual loss of talent, insight into the problem and how the systems work, and interest in the outcomes. Reducing employee autonomy is, in truth, the worst kind of technical debt because it drains not only the software but the people who'll have to work with it.

At any rate, I'd say that while this seems to be a problem associated with machine learning, it's just an issue surrounding complex functionality in general. Machine learning, quite often, is something we do to avoid an unmaintainable hand-written program. A "black box" image classifier, even though we can only reason about it empirically (i.e. throw inputs at it and see what comes out) is going to be, at the least, more trustworthy than a hand-written program that has evolved over a decade and had hundreds of special cases, coming from aged business requirements that no longer apply and programmers from a wide spectrum of ability, written in to it. All in all, I'd say that ML reduces total technical debt; it's just that it allows us to reach higher levels of complexity in functionality, and to get to places where even small amounts of technical debt can cause major pain.

The cynic in me would probably read the moves to tax Chinese solar panels as small steps towards banning and slowing the adoption of clean energy generation.

Because the alternative is just too stupid to consider. Making solar panels as they exist today is 1) a commodity business and 2) very, very, very simple. Surely no one would be stupid enough to believe that US manufacturing could be cost-competitive with China in manufacturing a product where the sole complexity lies in sourcing high purity silicon? There aren't even any manufacturing jobs involved here, it's probably 100% automated.

Some argue that China is subsidizing Chinese solar panel manufacturers and thereby distorting the market. But then that's not even necessary, since as explained nobody would be insane enough to try to rival them by manufacturing a commodity in the US, when you would have to ship the silicon over from China first.

So the effect Chinese solar subsidies are having is making solar panels extremely cheap for Americans, transforming the way the US generates energy. They're financing our switch to clean energy.

What?? Aren't we supposed to be committed to reducing carbon emissions? Why is the government forcing me to pay more to do the right thing for the environment? If someone is willing to give us solar panels for super cheap why are we upset?

Many of the Chinese solar manufacturers are again making a profit. I find the evidence that people are intentionally selling panels below cost dubious. There was a war for market share, and a lot of companies, like Suntech Power, went bankrupt. Seems like the market is working.

I can't help but think this is a classic bootleggers and baptists situation [0]. Traditional energy producers (gas, coal, etc) want to keep the price of solar energy high. Domestic and other nation's solar energy manufacturers want their competition's price to be high as well. Both groups win with high tariffs on Chinese solar panels. However, the average citizen loses.

None of this would matter if US solar installers would stop price-gouging the customer. I just got 4 kW of Canadian Solar panels installed. They come with an actual 25-year insurance bond to cover the warranty; I'd like to see China try that. The installation was literally half what SolarCity quoted me. The little guys make a nice profit without gouging, and you get superb quality panels.

This is my biennial letter to reemphasize Berkshire's top priority and to get your help on succession planning (yours, not mine!).

The top priority -- trumping everything else, including profits -- is that all of us continue to zealously guard Berkshire's reputation. We can't be perfect but we can try to be. As I've said in these memos for more than 25 years: We can afford to lose money -- even a lot of money. But we can't afford to lose reputation -- even a shred of reputation. We must continue to measure every act against not only what is legal but also what we would be happy to have written about on the front page of a national newspaper in an article written by an unfriendly but intelligent reporter.

Sometimes your associates will say "Everybody else is doing it." This rationale is almost always a bad one if it is the main justification for a business action. It is totally unacceptable when evaluating a moral decision. Whenever somebody offers that phrase as a rationale, in effect they are saying that they can't come up with a good reason. If anyone gives this explanation, tell them to try using it with a reporter or a judge and see how far it gets them.

If you see anything whose propriety or legality causes you to hesitate, be sure to give me a call. However, it's very likely that if a given course of action evokes such hesitation, it's too close to the line and should be abandoned. There's plenty of money to be made in the center of the court. If it's questionable whether some action is close to the line, just assume it is outside and forget it.

As a corollary, let me know promptly if there's any significant bad news. I can handle bad news but I don't like to deal with it after it has festered for a while. A reluctance to face up immediately to bad news is what turned a problem at Salomon from one that could have easily been disposed of into one that almost caused the demise of a firm with 8,000 employees.

Somebody is doing something today at Berkshire that you and I would be unhappy about if we knew of it. That's inevitable: We now employ more than 330,000 people and the chance of that number getting through the day without any bad behavior occurring is nil. But we can have a huge effect in minimizing such activities by jumping on anything immediately when there is the slightest odor of impropriety. Your attitude on such matters, expressed by behavior as well as words, will be the most important factor in how the culture of your business develops. Culture, more than rule books, determines how an organization behaves.

In other respects, talk to me about what is going on as little or as much as you wish. Each of you does a first-class job of running your operation with your own individual style and you don't need me to help. The only items you need to clear with me are any changes in post-retirement benefits, acquisitions, and any unusually large capital expenditures. But I like to read, so send along anything that you think I may find interesting.

I need your help in respect to the question of succession. I'm not looking for any of you to retire and I hope you all live to 100. (In Charlie's case, 110.) But just in case you don't, please send me a letter or email giving your recommendation as to who should take over tomorrow if you should become incapacitated overnight. These letters will be seen by no one but me unless I'm no longer CEO, in which case my successor will need the information. Please summarize the strengths and weaknesses of your primary candidate as well as any possible alternates you may wish to include. Most of you have participated in this exercise in the past and others have offered your ideas verbally. However, it's important to me to get a periodic update, and now that we have added so many businesses, I need to have your thoughts in writing rather than trying to carry them around in my memory. Of course, there are a few operations that are run by two or more of you - such as the Blumkins, the Merschmans, the pair at Applied Underwriters, etc. - and in these cases, just forget about this item. Your note can be short, informal, handwritten, etc. Just mark it "Personal for Warren."

Thanks for your help on all of this. And thanks for the way you run your businesses. You make my job easy.

WEB/db

P.S. Another minor request: Please turn down all proposals for me to speak, make contributions, intercede with the Gates Foundation, etc. Sometimes these requests for you to act as intermediary will be accompanied by "It can't hurt to ask." It will be easier for both of us if you just say no. As an added favor, don't suggest that they instead write or call me. Multiply 80 or so businesses by the periodic "I think he'll be interested in this one" and you can understand why it is better to say no firmly and immediately.

And then there's the tldr version that everyone seems to be actually using:

1. Send at least one mail per day to urge your user to try out a random feature that he doesn't care about.

2. Make sure every mail claims to be "not a bot", "not automatic", "I'm a real human!!1".

3. Close every mail with an offer to be available at any time for anything and everything. Make sure your user knows that he can call your CEO at 5 in the morning if he feels a sudden urge to have a personal product tour.

4. Mention at least two awesome webinars in every e-mail. Send regular reminders about webinars.

5. Also send invitations for every conference, meetup, and bbq party that you are in any way involved with.

I believe I have mentioned this before - and received a lot of downvotes for it: I, as a user, do not wish to be bothered with emails.

I often just briefly want to see what the fuss is all about. The app then "tricks me" into providing an email address by pretending it needs one as an identifier, most of the time in order to create an account. They then feel free to send me "Greg from blahblah app" emails.

To me, this is spam. It is an email I do not want. To me there is no difference between a random spammer who wants to sell fake viagra and Greg from blahblah app, who wants me to use his cloud-driven javascript thingy.

I believe there is a role missing in how companies view customer relationships. Just because I am looking at things in your store doesn't mean I want to be treated as a customer already. There should be a differentiation between somebody who has already bought something and somebody who is about to. Aggressively sending email at every chance isn't the way to make this transformation, imho.

Whoever is telling web companies that it is a good idea to suddenly start mailing "newsletters" to people who signed up for accounts is lying to them. Over the past year or so I've started getting emails from web sites I haven't visited in years - for example, I signed into an old hotmail account and apparently fark is sending out newsletters now?

The only thing these things do is make me click unsubscribe and make a mental note that the site sucks.

This article actually explains how to make your client hate you and your business.

Dear colleagues, please don't use HTML email notifications, because some technically advanced people just turn HTML off to a) reduce vulnerability (i.e., see all links and strip possible frames, img tags & JS) and b) make email processing faster and less resource-intensive.

IMHO there is just one common and most important rule of email notifications: it must be as simple as possible and not obtrusive. And please never force the client to register and leave an email address without real necessity. Minimize it and keep it simple. Then your audience (the smart and most influential part of it, at least) will love you.

Of course much of this has been written about in various blog posts, but having it all compiled in a single resource is very handy, especially when you're in the startup phase of a SaaS business selling booking software (https://zapla.co). I'll definitely be implementing a lot of this advice!

Really good advice here. One thing I particularly liked is the advice to email the first 1000 users manually and only set up automated drip campaigns as a way to automate what you find yourself sending over and over.

I'd also like to add this PSA: Don't send automated emails that pretend to be from a human. http://blog.beeminder.com/smarmbot (Blog post, "Don't Be a Smarmbot", in which I argue with @patio11 about this.)

Minor nitpick: surely the title should actually be "How to Send Email as a Startup". The current phrasing seems a bit odd, especially considering that it is a lecture on reaching out to potential future customers.

My biggest hindrance is getting a phone number that won't cost a fortune to call people in the US or abroad from Canada. Some Canadian telecom companies absolutely adore ripping people off when it comes to dialing other countries, as if we were in the 1980s.

Basically I want to approach my customers with a "call me" number that won't cost them a lot and won't cost me a lot to talk on.

I'm still reading through this wonderful guide; it is rife with useful information.

There's no point open-sourcing the crappy old parts of our architecture which we wouldn't advise new projects to use (like our Perl framework. There are so many Perl frameworks these days, and many of them better than ours) - but Overture is high quality stuff. Enjoy!

Overture is impressive in its inclusiveness. From the article: "There's also one-line support for animating views. You declare the layout property and its dependencies, and Overture will handle animating it between the different states. Full support for drag and drop, localisation, keyboard shortcuts, inter-tab communication, routing and more mean you have everything you need to build an awesome app."

Having inter-tab communication built in sounds especially enticing to me; I haven't heard of other frameworks handling this out of the box!

I just signed up for Fastmail this weekend and the snappiness of their app really impressed me. I emailed Neil and asked him about the framework, and he said they would release more info soon - happy to see it so quickly! I'm assuming the mobile app uses the same framework in a WebView?

So about two months ago, on about my two-year anniversary with the company, I switched from "Software Engineer" to "Solution Architect" (aka "Biz Dev Engineer"). It means that I now go to a lot of meetings (frequently with clients), where people draw on my knowledge of how our things work, and build a few things on the side. Mostly marketing material.

It's a crazy different experience from actually writing the code, and some of what I've learned and noticed has to do with "just being yourself". Part of that learning process has definitely involved being placed in high-enough stress situations that I was put off internal balance: by gaining familiarity with not being myself, I got a better idea of where myself was and how to stay centered there.

The first realization was that I could label my activities as reactive or proactive. As a software engineer, basically all my time was proactive. Now, most of my time is reactive. But... the original phrase I used to describe the reactive activities, and the way I try to approach them, is "to go be myself at things". The desired outcome is secondary to the experimental juxtaposition.

Another way to put it is, I think: I (like everybody) am a unique snowflake. The point of including me on anything and of sending me hither and yon is to have that uniqueness present and available. Well, part of the point.

It seems to be working. I think (although it's a little early to tell) that I'm more successful at this job than the last. At the least, I'm happier. Your mileage may vary.

As a final note, when applied to personal growth, I think this attitude ends up something like this: Don't aim for results, aim for experiences. Your "higher-self" (whatever that means to you) can't just rewrite your "lower-self" (the thing you're being when you're being yourself), but the former can aim the latter at particular experiences. Go find out what it's like to be yourself at that thing, and you learn a little bit about who you are and who you are changes a little bit. It's an explorative act.

Practical advice: The way to approach this concept is to first understand your struggle. Beneath the need to struggle is a fear that the world is an unfriendly place and you are not supported. This view arises from the mind rather than from the way, which teaches that the flow, the ever-present essence of life, is the way. You can trust that the way will lead you. In truth, the mind-made view of the world, where struggle is necessary, is merely illusion, no matter how real it might appear. Wu Wei is the way.

To follow Wu Wei you must first let go of struggle. Stop fighting with life and trying to make things happen; you are struggling against the flow. You must first realize that you can give this up. Then you act - you are not passive, merely waiting for things to happen - but you are no longer opposing the flow of events. Instead, you act, but let go into the uncertainty of life, and you see how life actually occurs. You become open to the mystery of which you are part. In a sense it is total acceptance of yourself and this moment. Of course, it is necessary to practice this. While the way is not of time, and we can be there in an instant, practice connects us to this place over time. Through practice the way reveals itself. Only through practice can this truth be revealed.

e.g.: Water may be directed and controlled by man-made dams, but it will always flow to its destination naturally. To be in accord with that nature, give up making dams, for a dam only delays that flow.

I think we all understand this at a simple, pragmatic level. For example, we all want to become so comfortable with the keyboard that when we think of the variable "bar" our hands automatically type it. I read the point of the article as saying that this applies at higher and higher levels of function as well: if we make things automatic we can spend more time thinking about and accomplishing the higher goals. And as the article mentions flow, again, I think we all know this to some degree.

And I was amused, if not surprised, to see that the Confucian writers had twisted this to support obedience to the power structure. We're lucky Plato never heard of them!

All narratives about enlightenment contain a form of letting go: zazen, koans, Siddhartha and his quest, etc. I read a book years ago by a French author who suggested that this could be reached by writing meaningless sentences, which, he made a long, winding point, is more difficult than it seems. I guess one can look at Asimov's writing like that too. What it says, apparently, is that there is a reality behind all this. The human mind does have the ability to make qualitative jumps. We just need to get rid of the travail first...

Socially, many people are trained to react and fill the gaps created by others. We fill awkward pauses in conversations with chatter. We feel compelled to do work that some other leaves unfinished.

Wu Wei is a practice designed to short-circuit this reflex through the pursuit of conscious non-action. The main effect on everyone else is to evoke a mild state of panic, or at least some uncomfortable fidgeting.

You can test this pretty easily the next time you talk to someone by saying nothing. Just leave a gap on purpose. Take no action. The person across from you will feel the pause and will most likely fill it. Sometimes, if the pause is long enough, they will fill it with personal details they would otherwise never share.

One feels drawn to a person like this because their inaction creates a gap we feel compelled to fill. This is the "charismatic" effect the author mentions in the article. In reality, it's more or less a passive aggressive technique to get people to do your work for you through willful negligence.

From what I understand of Asian philosophy, I think the dichotomy between the Confucian and Taoist value systems is a very good mental tool - the former stresses academic learning and analysis, while the latter strives for simplicity and doing what you can, right now, with the knowledge and tools you have.

To draw a caricature: the Confucian systems hold the bureaucracy that keeps systems going for decades in the highest regard, while a Taoist would value the spontaneity of an "agile and lean" system the most.

I think the "wu wei" concept is linked to a situation where a person has an intuitive understanding of a system and its practical degrees of freedom and constraints, and thus can let his subconscious perform most of the heavy lifting, versus a situation where, for one reason or another, the person does not have a lucid mental model of the field in which he is trying to work and proceeds through constant conscious cognitive evaluation. I might be completely off in my understanding, though.

I'm sorry, I do catch the idea, but it really sounds like fluff to me. As with many articles of this type, I see some discussion and history, but no content or true insight whatsoever - only some anecdotes and religion-style stories.

I find this whole thing really weird, and I suspect sites like reddit are being manipulated by someone.

Let's get the timeline right:

1. North Korea makes its disapproval of The Interview public and complains to the UN in the summer of this year

2. Sony is hacked and passwords are leaked. The passwords are the focus of the story

3. A couple of days go by, no mention of North Korea or The Interview

3.5 I've gotta be missing something here

4. Theaters (not Sony directly) decide to pull The Interview because of threats from NK

5. FBI blames NK for the Sony hack

6. Obama gets involved (?????)

The sequence of events just makes no sense. Then there are sites like reddit that are completely consumed by the story. The number of posts about it is insane, and there is little skepticism about the bizarre sequence of events or the blaming of NK.

An indirect proof of who hacked Sony is easy: whether or not the 'hackers' publish the movie (given that they got access to it, and given that Sony does not release it, as they say) will show who's behind it.

And in order to "save face," everyone jumped on that naive story - that it was a highly sophisticated hack by a foreign intelligence service, not an "admin" (or "fuck") password on some hotspot or Windows domain. It was the media division, btw, not a "techie" department.

I am exaggerating a bit about passwords, but the idea, I hope, is clear.

Am I mistaken, or did the "hacker group" only mention The Interview AFTER the media proposed the connection? It seems like whoever hacked Sony (edit: or somebody else!) just took advantage of an opportunity to cause some chaos. And the whole "FBI confirms NK" thing seems shady. None of this quite adds up.

If it really was North Korea, why would they deny it? Doesn't an act of retaliation require the perpetrator to take credit in order for it to have any benefit to them? Or could it be that North Korea is publicly denying it with the understanding that everyone really knows it's them?

OP here. This thing is confusing the hell out of me. I go back and forth on a daily basis as to whether I think it was the NKoreans or not. One major factor in my head that I haven't heard stated elsewhere is this: If the government says today "It was North Korea" and tomorrow a hacker group says "Lulz! It was us. Gotcha!", that makes the FBI/CIA/NSA look really, really bad. Bad enough that it would outweigh any benefits to blaming the NKors. Why would the feds go out on a limb like that if they weren't absolutely sure?

Would North Korea even have an interest in attacking a company like Sony Pictures in this way? Normally, when a nation-state mounts a cyber attack, it goes after useful targets: a government, to get an upper hand in negotiations, or perhaps industry or academia, to secure knowledge about some helpful technology. Sony Pictures would not be a canonical target for a nation-state, because they really don't have much to offer a state like North Korea (it's not like this attack will help the struggling North Korean film industry). In contrast, there would be more for North Korea to lose if the US retaliates.

I can't quite understand the allegation that NK is behind this because I don't see a motive.

I can't wait to see this movie now - not because I think it'll be great, but because of the controversy surrounding it, especially if Sony doesn't officially release it. I find it extremely ironic that so-called hackers are the ones threatening Sony (a dubious claim at best) and hackers will likely be the ones to get it "released," considering Sony's lackluster (at best) security.

"While some computer experts still express doubts whether the North was actually behind the attack, American officials said it was similar to what was believed to be a North Korean cyberattack last year on South Korean banks and broadcasters. One key similarity was the fact that the hackers erased data from the computers, something many cyberthieves do not do."

I won't pretend to be an expert on information security but surely this isn't anywhere close to being unique enough to point blame at North Korea?

I've said this elsewhere, but if this whole thing was perpetrated by the North Korean state because of the potential offense of showing their leader being killed, why has the scene of their leader being killed been leaked by the hackers supposedly controlled by North Korea, and why is it now posted all over reddit?

Literally every day now I run across something mentioning the "Sony hack", but I haven't yet understood why it's such a significant topic. It seems to gather way more attention than I'd imagine something like this should. Every now and then somebody gets "hacked" - sometimes somebody pretty big - and it's not that uncommon for some really important data to get leaked, but it never goes further than a mention on HN or something: no jokes about it on 9gag, no North Korea joining the investigation. What's the matter?

Maybe it's because I missed original news. Can somebody provide link or explanation why the heck it's so important that even completely non-technical people buzz about it all the time?

Anyone else frustrated that you can't have a conversation about this without the vast majority of threads taking on a conspiratorial tone? I suppose it's human nature for something with such power players as Sony/FBI/NK to seek out hidden motives and what not - but the comment quality really drops off.

1. Does NK even have the capability to pull something like this off? They seemingly fail at every other intimidation stunt they attempt, and now they have a massive success out of nowhere? Hm...

2. Why would they deny it if they did it? It's very out of character for them to not pounce on the chance of something being very embarrassing to the US.

3. With all the talk of it being so complicated to pinpoint exactly where the attacks came from, what info is the US gov using to pin this on NK (besides the very easy narrative around the context of the movie)? They have to have a bit more intel than they're letting on... or something is fishy here.

The Russians or NK could have secretly hired Chinese hackers to make it look like NK. Now they can embarrass the U.S. for jumping the gun, like it did with Iraq and weapons of mass destruction (assuming the connection cannot be proved).

Nothing that North Korea says should be treated as though it were legitimate. This is a country that keeps something on the order of a quarter of a million people in actual concentration camps, and tens of millions more utterly brainwashed and in unspeakable poverty. This is now, today, in 2014.

When was the last time most, if not all, of the community here hung out with a guy or gal from the Foreign Service? Or better yet a member of the State Department of the USA?

Or the equivalent in their home country; that's just as well, given that the people I've met all over the world who work in their country's foreign service department are generally good people.

If you haven't seen "A Beautiful Mind," it's a great film, and the math literally helps explain why North Korea, despite evidence, might be a "sock puppet" used by... well, let's see.

What country is having a really, really crappy time with economic sanctions right now?

Maybe, just maybe, a bit of experience interacting with the folks who (gasp) make these kinds of decisions would make the whole situation easier to explain. Or most of us could simply revisit kindergarten in the US - e.g., the game of "tag." Remember how to claim a cookie that you're not supposed to eat?

Touch it. "If I touch it, I own it," because nobody wants to eat the cookie you touched after you licked your finger, right? So Russia, perhaps, "licks their finger," tunnels through, and then when we discover the breach: "Look, it's the North Koreans!"

If not them, I'd say Luxembourg is behind it all. We know most American companies that have operations overseas use them to launder (I mean, mitigate) tax burdens in Europe, right?

Having a dev team there in Belarus, I spent 4 of the past 12 months there. This seems to be about pricing.

Because of the strong economic ties between Russia and Belarus, the sanctions on Russia are also causing a huge devaluation of the Belarusian currency.

My friends have been telling me of long lines at the banks to convert their money to dollars. To prevent the rush, starting today there is a 30% tax on converting your money to dollars - essentially a 30% devaluation of their currency.
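The equivalence between a flat conversion tax and a devaluation is easy to check; here's a tiny sketch (the exchange rate is a made-up illustrative number, not the real one):

```python
# With a flat tax on currency conversion, each unit of local currency
# buys proportionally fewer dollars - the same effect, for the holder,
# as devaluing the currency by the tax rate.
rate = 1 / 10800       # hypothetical USD received per unit of local currency
tax = 0.30             # 30% tax on converting to dollars

effective_rate = rate * (1 - tax)        # what you actually receive per unit
devaluation = 1 - effective_rate / rate  # effective loss of purchasing power
print(f"effective devaluation: {devaluation:.0%}")
```

Note the tax rate cancels the exchange rate entirely: whatever the official rate, a 30% conversion tax means 30% fewer dollars in hand.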

Since many things are imported into Belarus, any currency devaluation affects the price of goods, so people rushed to stores to buy anything they could (and now I've been told the shelves are almost empty).

But here's another issue: prices seem to be controlled by the government. For instance, when you go to restaurants, all menus that list prices have a stamp and signature (think of it like a notarized stamp). This is so restaurants won't be able to increase prices as they wish.

From the Google translation, it seems to me the online store started increasing its prices to match the devalued currency and the government didn't like it, because it contradicts their message about the stability of the currency. They might have done the same to a physical store in this case.

There are nine of them (actually ten, but one is just for bridges), so you'd have to disrupt at least five to prevent them from forming a majority consensus vote. It looks like the countries that own the IP address allocations for each dirauth are:

Austria, Germany, Germany, Holland, Holland, Sweden, US, US, US

If the above is all correct, a US<->Germany collaboration - to pick the largest set from two countries - would be one way to cause a large problem.
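A quick sketch of that quorum arithmetic, using the country counts from the list above (which may not match the live network):

```python
from collections import Counter
from itertools import combinations

# Voting directory authorities by hosting country, per the list above
# (the tenth, bridge-only authority is excluded from the count).
countries = ["Austria", "Germany", "Germany", "Holland",
             "Holland", "Sweden", "US", "US", "US"]

majority = len(countries) // 2 + 1  # 5 of 9 votes needed for a consensus
counts = Counter(countries)

# Which two-country coalitions control enough dirauths to deny a majority
# to the remaining authorities?
pairs = [(a, b) for a, b in combinations(counts, 2)
         if counts[a] + counts[b] >= majority]
print(majority)  # 5
print(pairs)     # [('Germany', 'US'), ('Holland', 'US')]
```

By this count a US+Holland pairing also reaches five servers, not just US+Germany - either coalition could block the consensus.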

This is a very big deal if it happens. Roger's linked post on the Tor site talks about "seizure" of directory authority servers; only government authorities would have that power. In the U.S. that typically happens only after a court grants a seizure order, which would be under seal at this stage.

Of the countries where the servers are located, the U.S. has the most extreme copyright laws, which means, sadly, FedGov is the leading candidate to be behind any possible seizure.

It would be interesting if an enterprising journalist were to ask MIT, SF-based Applied Operations, and RiseUp if they've been contacted by law enforcement on this matter. Those organizations host some of the U.S.-based servers. RiseUp has a warrant canary but it hasn't been updated recently: https://help.riseup.net/en/canary

Of course we don't know what actually is going on and it all may be (I hope!) a false alarm.

PS: If multiple governments cooperate and a majority of servers are taken down, what happens to Tor after the consensus interval expires? I don't know; maybe someone more familiar with Tor does. The consensus interval was changed to 72 hours a few years ago: https://trac.torproject.org/projects/tor/ticket/7986

It's very interesting that the project has some advance notice about a threat.

My first guess would be that a nation has made some demands of the project that the project won't comply with, and that country has suggested they will seize the directory authority servers located inside it if the demands aren't met soon. [Edit: a new comment by arma on the original story, "To be sure to keep our source safe, we're not providing more details quite yet", makes this seem less likely.]

Or perhaps an insider has leaked some plans to the project.

Along another line of thought, if the US government wanted to further complicate online privacy, I imagine they'd choose a time like now, when headlines about the "cyber intrusions" of 2014 are at a peak. I wonder what other actors could have large enough power over their directory authority servers for the project to post this message.

Edit: Indeed, from a post below by paralelogram [0] and by checking https://atlas.torproject.org , it appears 4 of 9 are in the US. There are also two in Germany, one in the Netherlands (as well as another there that is only for bridge relays), one in Austria, and one in Sweden.

Someone posted a somewhat toxic but somewhat valid point, and the project responded with more details about their 'source'. I've added their response here, but it may be best to read the original post:

To be sure to keep our source safe, we're not providing more details quite yet.

But actually, we don't know many more details than the ones we posted. And as for your 'why', that's an excellent question, and one we've been wrestling with too. There are nine directory authorities, spread around the US and Europe. If they're trying to hunt down particular Tor users, most possible attacks on directory authorities would be unproductive, since those relays don't know anything about what particular Tor users are doing.

Our previous plan had been to sit tight and hope nothing happens. Then we realized that was a silly plan when we could do this one [post the warning] instead.

If there are some seizures of directory authorities or other project infrastructure, this won't be some totally unpredictable occurrence. It was only about a month and a half ago that some relays were seized as part of a general takedown against Tor hidden services. The Tor project posted this blog in response:

That blog post convinced me to shut down my relay. The reason is, to an ambitious prosecutor this blog post looks like:

"We view law enforcement operations as attacks and are looking for ways to defeat them, because we are determined to shield the identities of our criminal clients"

... which is exactly what resulted in the operators of the Silk Roads getting arrested even though they were not personally selling drugs.

The blog post makes casual reference to the "enormous social value" of hidden services and claims they're worried about "secret police repressing dissidents", but doesn't cite any actual examples. Actually, I've never heard of a hidden service that has enormous social value - while there are a small number of .onion addresses that aren't completely illegal or unethical, for all the examples I know of, the operators are not anonymous.

To police forces around the world who keep having investigations hit a dead end because of Tor, going after the project directly will not seem very different than going after services like Liberty Reserve. The people running it are stating publicly that they will do their best to frustrate investigations, and that is dangerously close to admitting participation in a criminal conspiracy. Thin ice doesn't even begin to describe their current situation.

Assuming it's a legal entity that will be performing these seizures, I'm curious to know the case against these servers. To my (albeit somewhat limited) knowledge of the Tor network, these DA's exist solely to maintain the integrity and structure of the network, and to provide a list of known relays to clients.

I also understand that this list of trusted DA's is hardcoded into Tor clients. Since this is the case, I'd be curious how the network could be restored if there is a coordinated action on these servers.

Seems like somebody in the DoJ just decided that Tor's balance between geeky CompSci curiosity and enabler of real-world criminal behavior has tipped too far in the latter direction. The legal case has been ripe for a while - after all, Megaupload and many other networks have been disabled by the US government for enabling significantly LESS serious criminality. Ummm... world's biggest drug marketplace, anyone??? What's important to remember is that the gov't can't just go in and seize the directory authority servers willy-nilly. Instead, they must do it as part of a legal process against a specific, identified target. In this case, the likely target is going to be the Tor project itself and possibly the individuals leading it. The legal case might ruffle a few techie feathers, but only an insignificant portion of the general public will care, and that portion can be mollified with the "stopping the bad horrible criminals" routine.

If this turns out to be (1) real and (2) linked to the Sony fiasco, then North Korea has triumphed. They have taken down two enemies in a single hack: a film and an internet technology. That puts them ahead of the MPAA and the NSA combined.

the most recently restarted dirauths appear to run Tor 0.2.6.1-alpha-dev, including four of the five US-based dirauths (moria1, Faravahar, urras, dizum). gabelmoo, tor26, longclaw, Tonga, and maatuska appear to be running Tor 0.2.5.10. dannenberg is running Tor 0.2.5.9-rc.

The United States Government will fail, because even if they were to significantly disrupt the Tor network, we'll pull out the zero-knowledge proofs on them. We have the crypto and technology to build a super-resilient Tor replacement that they cannot do a single thing about. Tor is antiquated, and I personally hope they take it out, because its replacement will be 100x better.

If funded, a user governed foundation will be set up to help prevent influence by misaligned interests, such as those seen with existing providers and closed source software vendors. Infrastructure was always meant to be open, transparent and trustworthy.