Interesting post, and I’m sure “not having thought of it” helps explain the recency of vehicular attacks (though see the comment from /​r/​CronoDAS questioning the premise that they are as recent as they may seem).

Another factor: Other attractive methods, previously easy, are now harder—lowering the opportunity cost of a vehicular attack. For example, increased surveillance has made carefully coordinated attacks harder. And perhaps stricter regulations have made it harder to obtain bomb-making materials or disease agents.

This also helps to explain the apparent geographical distribution of vehicle attacks: more common in Europe and Canada than in the United States, especially per capita. Alternative ways to kill many people, like with a gun, are much easier in the US.

Yet another explanation: Perhaps the reason terrorist behavior doesn’t appear to maximize damage or terror is that much terrorism is not intended to do so. My favorite piece arguing this is from Gwern:

How much support is there for promotion of prediction markets? I see three levels:

1. Legalization of real-money markets (they are legal in some places, but their illegality or legal dubiousness in the US—combined with the centrality of US companies in global finance—makes it hard to run a big one without US permission)

2. Subsidies for real-money markets in policy-relevant issues, as advocated by Robin Hanson

3. Use of prediction markets to determine policy (futarchy), as envisioned by Robin Hanson

1. We want public policy that’s backed up by empirical evidence. We want a government that runs controlled trials to find out which policies work.

This seems either empty (because no policy has zero empirical backing), throttling (because you can’t possibly have an adequate controlled trial on every proposal), or pointless (because most political disputes are not good-faith disagreements over empirical support).

Second, as this list seems specific to one country, I wonder how rationalists who don’t follow its politics can inform this consensus.

Third, did you choose eight demands only to mimic the Fabians? Does that mean you omitted some other plausible demands, or that you stretched a few that perhaps should not have made the cut?

Useful distinction: “rationalist” vs. “rational person.” By the former I mean someone who deliberately strives to be the latter. By the latter I mean someone who wins systematically in their life.

It’s possible that rationalists tend to be geeks, especially if the most heavily promoted methods for deliberately improving rationality are mathy things like explicit Bayesian reasoning, or if most of the material advocating rationality is heavily dependent on tech metaphors.

Rational people need not fit the stereotypes you’ve listed. Most people I know who seem to be good at living have excellent social skills and are physically fit. Some well-known rationalists, or fellow travelers, also do not fit. An example is Tim Ferriss.

Hey, I just saw this post. I like it. The coin example is a good way to lead in, and the non-quant teacher example is helpful too. But here’s a quibble:

If we follow Bayes’ Theorem, then nothing is just true. Things are instead only probable because they are backed up by evidence.

The map is not the territory; things are still true or false. Bayes’ theorem doesn’t say anything about the nature of truth itself; whatever your theory of truth, that should not be affected by the acknowledgement of Bayes’ theorem. Rather, it’s our beliefs (or at least the beliefs of an ideal Bayesian agent) that are on a spectrum of confidence.
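To make the distinction concrete, here is a minimal sketch (my own illustration, not from the original post) of the coin example: the hypothesis "this coin is biased 75% toward heads" is simply true or false in the territory, while Bayes' theorem only adjusts our confidence in it as evidence arrives.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H | E) via Bayes' theorem: likelihood times prior, normalized."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start at 50% confidence that the coin is biased (75% heads vs. a fair coin).
belief = 0.5
for flip in ["H", "H", "T", "H", "H"]:
    if flip == "H":
        belief = posterior(belief, 0.75, 0.5)  # heads is likelier if biased
    else:
        belief = posterior(belief, 0.25, 0.5)  # tails is likelier if fair

print(round(belief, 3))  # confidence rises toward, but never reaches, 1
```

The belief moves along a spectrum of confidence with each flip, but at no point does the coin itself become "probably biased" — the bias is a fixed fact about the territory.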

Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.

We can leave an AI machine in day-to-day charge of such a system, automatically self-correcting, learning from mistakes, and meeting the objectives of its human masters.

This means that risk management and micro-prudential supervision are well suited for AI. The underlying technical issues are clearly defined, as are both the high- and low-level objectives.

However, the very same qualities that make AI so useful for the micro-prudential authorities are also why it could destabilise the financial system and increase systemic risk, as discussed in Danielsson et al. (2017).

Conclusion:

Artificial intelligence is useful in preventing historical failures from repeating and will increasingly take over financial supervision and risk management functions. We get more coherent rules and automatic compliance, all at much lower cost than under current arrangements. The main obstacle is political and social, not technological.

From the point of view of financial stability, the opposite conclusion holds.

We may miss out on the most dangerous type of risk-taking. Even worse, AI can make it easier to game the system. There may be no solution to this, whatever the future trajectory of technology. The computational problem facing an AI engine will always be much harder than that facing those who seek to undermine it, not least because of endogenous complexity.

Meanwhile, the very formality and efficiency of the risk management/​supervisory machine also increases homogeneity in belief and response, further amplifying pro-cyclicality and systemic risk.

The end result of the use of AI for managing financial risk and supervision is likely to be lower volatility but fatter tails; that is, lower day-to-day risk but more systemic risk.
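The "lower volatility but fatter tails" claim can be illustrated with a toy two-regime model (my own sketch with made-up numbers, not from the article): suppose AI-managed markets are calmer than today on 99% of days, but on 1% of days everyone's homogeneous models fail together and moves are far larger. The blended distribution has a lower standard deviation than the baseline, yet a much higher probability of extreme moves.

```python
import math

def normal_tail(z):
    """P(X > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Baseline regime: daily moves ~ N(0, 1); tail event = a move beyond 4 units.
base_sd = 1.0
base_tail = 2 * normal_tail(4 / base_sd)

# Hypothetical AI-managed regime (illustrative parameters): 99% calm days
# with sd 0.5, 1% crisis days with sd 6 when models fail in unison.
calm_sd, crisis_sd, crisis_prob = 0.5, 6.0, 0.01
mix_var = (1 - crisis_prob) * calm_sd**2 + crisis_prob * crisis_sd**2
mix_sd = math.sqrt(mix_var)  # overall day-to-day volatility of the mixture
mix_tail = (1 - crisis_prob) * 2 * normal_tail(4 / calm_sd) \
         + crisis_prob * 2 * normal_tail(4 / crisis_sd)

print(f"volatility: {base_sd:.2f} -> {mix_sd:.2f}")          # lower
print(f"P(|move| > 4): {base_tail:.1e} -> {mix_tail:.1e}")   # much higher
```

With these numbers the mixture's volatility is about 0.78 versus 1.0, while the chance of a 4-unit move rises by roughly two orders of magnitude — lower day-to-day risk, more systemic risk.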