As I think free banking is the proper way of arranging a financial system, I do not believe that NGDP targeting is superior to it. It could be argued that free banking approximates NGDP targeting, but this need not be the case if banks monetise only liquid, high-turnover assets (the real bills doctrine), as they used to in Scotland.

Psychology

A critique of the theory that atheism is caused by mutational load (which would go hand in hand with lower fitness across the board).

"Better art taste" is to be read like "intelligence" as in g: there is a scale that measures it, and the proposition is to be read in those terms.

Maybe the reason people dislike the idea of IQ is that there is a difference between the popular concept of intelligence and the psychologists' concept. As readers of Nintil, you won't be shocked if I say that "A is less intelligent than B" is a coherent proposition that can be true or false. But similar statements could be made about other properties, such as aesthetic ones: "A is less beautiful than B". This is generally regarded as the realm of the subjective, but making a scale for it enables one to make statements that sound broader than they actually are.

Lastly, I meant to write a review of Pinker's latest book, but I didn't. Instead, I'll just say that I find myself in agreement with most of what he says, except for the bit about existential risks, where he is not sufficiently Bayesian (but maybe he is, depending on how you read his words. Is he denying the possibility of assigning credences to rare events?). His views on the problems around superintelligent AI are better than his previous takes on the matter, but still a bit silly. As an example, at some point he claims that if mankind is smart enough to build superintelligent AI, we are smart enough to avoid killing ourselves with it, ignoring that our end was a possible outcome of the Cold War. Yes, we survived, but we might not have. Interestingly, he says that if an AI is smart enough to take over the world, it will be smart enough to understand what we mean by "Make me some paperclips" (make 4 or 5 vs. take over the universe to make lots, just in case). Initially, this may seem like another bad take: an AI system composed of several modules can have a deficient input module for interpreting orders and yet be good at getting things done. But it could be argued that having good priors about the world (knowing what humans mean by "make me a bunch of paperclips") is part of what is needed to take it over. If he wants to offer an argument against worrying about AI, though, perhaps a better one is the Talebian argument: the world is just too random. In a coin-tossing exercise, a superintelligent AI won't do better than you. Can a superintelligent AI do better than the stock market? Etc. I haven't seen this explored in much depth elsewhere, but if valid, it defuses the issue.

Another problem with Pinker's book is data comparability. The book indirectly claims that Montreal in the 60s was as liberal as the Middle East in 2005. This is not plausible. It is an artifact of the way the plot is constructed (by extrapolation) and of the thing it measures: subjective responses. If one asks "Is being gay a bad thing? Rank your answer on a Likert scale", a 5 in Montreal might mean that one has some vague qualms about gay people, but otherwise that's it, while a 5 in Iraq may mean that one will stone them on sight.

And another is data quality. However, even if these critiques partially undermine some of the points, the conjunction of all the evidence is mutually reinforcing, and the overall point still stands.