Another month has passed and here is a new rationality quotes thread. The usual rules are:

Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)

Do not quote yourself.

Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.

As a non-biologist, I kind-of suspect that article is supposed to be some kind of elaborate joke. It sounds convincing to me, but then again, so did Sokal (1996) to non-physicists; my gut feeling assigns that claim a tiny prior probability (though probably tinier than rationally warranted, possibly because it kind-of sounds like a parody of ancient-astronaut hypotheses); and I can't find any mention of any mammal inter-order hybrids on Wikipedia.

Sokal's paper brought up the possibility of a morphogenetic field affecting quantum mechanics, which sounds slightly less rigorous than a Discworld joke -- Sir Pratchett can at least get the general aspects of quantum physics correct. Likewise, Mrs. Jenna Moran's RPGs make more meaningful statements about set theory than Sokal's joking conflation of the axiom of equality with feminist/racial equality. I'd expect a lot of non-physicists to find it unconvincing, especially if you allow them the answer "this paper makes no sense".

((I'd honestly expect false positives, more than false negatives, when asking average people to /skeptically/ test papers on quantum mechanics for fraud. Thirty pages of math showing a subatomic particle to be charming has language-barrier problems.))

The greater concern here is that the evidence Mr. McCarthy uses to support his assertions is incredibly weak. The vast majority of his list of interspecies hybrids, for example, are either intra-familial or completely untrustworthy (some are simply appeals to legends or internet hoaxes, like the cabbit or dog-bear hybrids). The only remotely trustworthy example of variation comparable to a chimpanzee-pig hybrid is an alleged rabbit-rat cross, but chasing the citation shows that the claimed evidence likely had a different (and, at the time of the original experiment, unknown) cause and that the fertilization never occurred. Other cases conflate mating behavior with fertility, by which definition humans should be capable of hybridizing with rubber and glass. The sheer number of untrustworthy citations -- and, more importantly, the fact that they're mixed together with the verifiable and known-good ones -- is a huge red flag.

The quote's interesting -- and correct! as anyone who's seen the double-slit experiment can attest -- but there are probably better ways to say it and better theories to associate it with.

This is a blatant parody. The probability that pig-chimp hybrids were involved in human origins is at Pascal-low levels.

It sounds convincing to me

This is worthy of notice. It really shouldn't have been remotely convincing.

Can you identify the factors which caused you to give the statements in this article more credibility than you would have given to any random internet source of an unlikely-sounding claim? Information about what went wrong here might be useful from a rationality-increasing perspective.

Can you identify the factors which caused you to give the statements in this article more credibility than you would have given to any random internet source of an unlikely-sounding claim?

Mostly, the fact that I don't know shit about biology, and the writer uses full, grammatical sentences, cites a few references, anticipates possible counterarguments and responds to them, and more generally doesn't show many of the obvious signs of crackpottery.

This is exactly why I (amongst many?) find it so hard to separate the good stuff from the bad stuff.
It's the way the matter is brought to you, not the matter itself. A very thoughtful way of bringing it, as Army1987 says: references, anticipation of counterarguments, etc.

I realize that if you ask people to account for 'facts,' they usually spend more time finding reasons for them than finding out whether they are true. [...] They skip over the facts but carefully deduce inferences. They normally begin thus: 'How does this come about?' But does it do so? That is what they ought to be asking.

In some species of anglerfish, the male is much smaller than the female and incapable of feeding independently. To survive he must smell out a female as soon as he hatches. He bites into her, releasing an enzyme which fuses him to her permanently. He lives off her blood for the rest of his life, providing her with sperm whenever she needs it. Females can have multiple males attached. The moral is simple: males are parasites, women are sluts. Ha! Just kidding! The moral is: don't treat actual animal behavior like a fable. Generally speaking, animals have no interest in teaching you anything.

ETA: Explanation: Sometimes the banner at the bottom will contain an actual (randomized) ad, but many of the comics have their own funny mock ad associated. (When I noticed this, I went through all the ones I had already read again, to not miss out on that content.)

(I thought I'd clarify this, because this comment got downvoted - possibly because the downvoter misunderstood it as sarcasm?)

We have this shared concept that there's some baseline level of effort, at
which point you've absolved yourself of finger-pointing for things going
badly. [.... But t]here are exceptional situations where the outcome is
more important than what you feel is reasonable to do.

Ideally, it would be nice if the world could move towards caring about the full outcome, rather than about factors like the satisfaction of baseline levels of effort, in more and more situations, not just exceptional ones.

Personally, a huge breakthrough for me was realizing I could view social situations as information-gathering opportunities (as opposed to pass-fail tests). If something didn't work - that wasn't a fail, it was DATA. If something did work... also data. I could experiment! People's reactions weren't eternal judgments about my worth, but interesting feedback on the approach I had chosen that day.

Caution in applying such a principle seems appropriate. I say this because I've long since lost track of how often I've seen on the Internet, "I lost all respect for X when they said [perfectly correct thing]."

I agree. It strengthens your point to note that, although the quote is normally used seriously, the author intended it mischievously. In context, the "thirteenth stroke" is a defendant, who has successfully rebutted all the charges against him, making the additional claim that "this [is] a free country and a man can do what he likes if he does nobody any harm."

Caution in applying such a principle seems appropriate. I say this because I've long since lost track of how often I've seen on the Internet, "I lost all respect for X when they said [perfectly correct thing]."

I don't lose all respect for X based on one thing they say, but I do increase my respect in them if the controversial or difficult things they say are correct and I conserve expected evidence.

For most people, is it necessarily wrong to lose all respect for someone in response to a true statement? Most people are respecting things other than truth, and the point "anyone respectable would have known not to say that" can remain perfectly valid.

A man who has made up his mind on a given subject twenty-five years ago and continues to hold his political opinions after he has been proved to be wrong is a man of principle; while he who from time to time adapts his opinions to the changing circumstances of life is an opportunist.

"...By the end of August, I was mentally drained, more drained, I think, than I had ever been. The creative potential, the capacity to solve problems, changes in a man in ebbs and flows, and over this he has little control. I had learned to apply a kind of test. I would read my own articles, those I considered the best. If I noticed in them lapses, gaps, if I saw that the thing could have been done better, my experiment was successful. If, however, I found myself reading with admiration, that meant I was in trouble."

Reality is one honey badger. It don’t care. About you, about your thoughts, about your needs, about your beliefs. You can reject reality and substitute your own, but reality will roll on, eventually crushing you even as you refuse to dodge it. The best you can hope for is to play by reality’s rules and use them to your benefit.

Reality is one honey badger. It don’t care. About you, about your thoughts, about your needs, about your beliefs.

Reality cares about your beliefs.

People who don't believe in ego depletion don't get as much ego depleted as people who do believe in it.
People who believe that stress is unhealthy have a higher mortality when they have high stress than people who don't hold that belief.

I would expect that if you have more ego depletion than other people it would result in you being more likely to believe in ego depletion. Similarly, if you're suffering health problems due to stress, it would make you think stress is unhealthy.

Your point still stands. Reality does care about your beliefs when the relevant part of reality is you.

The placebo effect has little relevance here. People who believe they can fly don't fare better when pushed off cliffs. A world where you believe x is different from a world where you believe not-x, and that has slight physical effects given that we are embodied, but to say 'Reality cares about your beliefs' sounds far too much like a defence of idealism, or the idea that 'everyone has their own truths'.

I'm not sure whether that's true; the last time I investigated that claim I didn't find the evidence compelling. Placebos are also a relatively clumsy way of changing beliefs intentionally.

People who believe they can fly don't fare better when pushed off cliffs.

How do you know? If you pick a height that kills 50% of the people who don't believe that they can fly, I'm not sure that the number of people killed is the same for those who hold that belief. The belief is likely to make people more relaxed when they are pushed over the cliff, which is helpful for surviving the experience.

I doubt that you'll find many people who hold that belief with the same certainty that they believe the sun will rise tomorrow. If you don't like idealism, argue based on the beliefs that people actually hold in reality instead of escaping into thought experiments.

A world where you believe x is different from a world where you believe not-x, and that has slight physical effects given that we are embodied,

I would call 20,000 dead Americans per year from the belief that stress is unhealthy more than a slight physical effect.

'Reality cares about your beliefs' sounds far too much like a defence of idealism

I don't think that the fact that you pattern-match it that way speaks against the idea.
I think the original quote comes from a place of Descartes-inspired mind-body dualism. We are embodied, and the content of our mind has effects.

I doubt that you'll find many people who hold that belief with the same certainty that they believe the sun will rise tomorrow. If you don't like idealism, argue based on the beliefs that people actually hold in reality instead of escaping into thought experiments.

The original quote is taken from an article about the vaccine controversy. People who don't vaccinate because they believe that God will protect them or whatever actually exist, and they may be slightly less likely to fall ill than people who don't vaccinate without holding that belief, but a lot more likely to fall ill than people who do vaccinate.

My rule has to do with paradigm shifts—yes, I do believe in them. I've been through a few myself. It is useful if you want to be the first on your block to know that the shift has taken place. I formulated the rule in 1974. I was visiting the Stanford Linear Accelerator Center (SLAC) for a week to give a couple of seminars on particle physics. The subject was QCD. It doesn't matter what this stands for. The point is that it was a new theory of sub-nuclear particles and it was absolutely clear that it was the right theory. There was no critical experiment but the place was littered with smoking guns. Anyway, at the end of my first lecture I took a poll of the audience. "What probability would you assign to the proposition 'QCD is the right theory of hadrons'?" My socks were knocked off by the answers. They ranged from .01 percent to 5 percent. As I said, by this time it was a clear no-brainer. The answer should have been close to 100 percent.
The next day I gave my second seminar and took another poll. "What are you working on?" was the question. Answers: QCD, QCD, QCD, QCD, QCD,........ Everyone was working on QCD. That's when I learned to ask "What are you doing?" instead of "What do you think?"

I saw exactly the same phenomenon more recently when I was working on black holes. This time it was after a string theory seminar, I think in Santa Barbara. I asked the audience to vote whether they agreed with me and Gerard 't Hooft or if they thought Hawking’s ideas were correct. This time I got a 50-50 response. By this time I knew what was going on so I wasn't so surprised. Anyway I later asked if anyone was working on Hawking's theory of information loss. Not a single hand went up. Don't ask what they think. Ask what they do.

Not necessarily a great metric; working on the second-most-probable theory can be the best rational decision if the expected value of working on the most probable theory is lower due to greater cost or lower reward.

This is why many scientists are terrible philosophers of science. Not all of them, of course; Einstein was one remarkable exception. But it seems like many scientists have views of science (e.g. astonishingly naive versions of Popperianism) which completely fail to fit their own practice.

Yes. When chatting with scientists I have to intentionally remind myself that my prior should be on them being Popperian rather than Bayesian. When I forget to do this, I am momentarily surprised when I first hear them say something straightforwardly anti-Bayesian.

I see. I doubt that it is as simple as naive Popperianism, however. Scientists routinely construct and screen hypotheses based on multiple factors, and they are quite good at it, compared to the general population. However, as you pointed out, many do not use or even have the language to express their rejection in a Bayesian way, as "I have estimated the probability of this hypothesis being true, and it is too low to care." I suspect that they instinctively map intelligence explosion into the Pascal mugging reference class, together with perpetual motion, cold fusion and religion, but verbalize it in the standard Popperian language instead. After all, that is how they would explain why they don't pay attention to (someone else's) religion: there is no way to falsify it. I suspect that any further discussion tends to reveal a more sensible approach.

Yeah. The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language, and they aren't necessarily taught probability theory very thoroughly, they're used to publishing papers that use p-value science even though they kinda know it's wrong, etc.

So maybe if we had an extended discussion about philosophy of science, they'd retract their Popperian statements and reformulate them to say something kinda related but less wrong. Maybe they're just sloppy with their philosophy of science when talking about subjects they don't put much credence in.

This does make it difficult to measure the degree to which, as Eliezer puts it, "the world is mad." Maybe the world looks mad when you take scientists' dinner party statements at face value, but looks less mad when you watch them try to solve problems they care about. On the other hand, even when looking at work they seem to care about, it often doesn't look like scientists know the basics of philosophy of science. Then again, maybe it's just an incentives problem. E.g. maybe the scientist's field basically requires you to publish with p-values, even if the scientists themselves are secretly Bayesians.

The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language

I'm willing to bet most scientists aren't taught these things formally at all. I never was. You pick it up out of the cultural zeitgeist, and you develop a cultural jargon. And then sometimes people who HAVE formally studied philosophy of science try to map that jargon back to formal concepts, and I'm not sure the mapping is that accurate.

they're used to publishing papers that use p-value science even though they kinda know it's wrong, etc.

I think 'wrong' is too strong here. It's good for some things, bad for others. Look at particle-accelerator experiments -- frequentist statistics are the obvious choice because the collider essentially runs the same experiment 600 million times every second, and p-values work well to separate signal from a null hypothesis of 'just background'.
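As a rough illustration of that use case, here is a minimal sketch of a one-sided p-value for a Poisson counting experiment; the background rate and observed count are made-up numbers, not from any real analysis:

```python
from scipy.stats import poisson

# Hypothetical counting experiment: is the observed event count consistent
# with background alone, or is there a signal on top of it?
background_rate = 100.0  # expected events under the "just background" null
observed = 135           # made-up observed count

# One-sided p-value: probability of counting >= observed events if the
# null hypothesis is true. poisson.sf(k, mu) gives P(X > k), so we pass
# observed - 1 to get P(X >= observed).
p_value = poisson.sf(observed - 1, background_rate)
print(f"p-value under the background-only null: {p_value:.2e}")
```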

For what it's worth, I understand the arguments in favor of Bayes well, yet I don't think that scientific results should be published in a Bayesian manner. This is not to say that I don't think frequentist statistics is frequently and grossly misused by many scientists, but I don't think Bayes is the solution to this. In fact, many of the problems with how statistics is used, such as implicitly performing many multiple comparisons without controlling for this, would be just as large a problem with Bayesian statistics.

Either the evidence is strong enough to overwhelm any reasonable prior, in which case frequentist statistics will detect the result just fine; or else the evidence is not so strong, in which case you are reduced to arguing about priors, which seems bad if the goal is to create a societal construct that reliably uncovers useful new truths.
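A minimal sketch of that first branch, with invented likelihood ratios: when the cumulative likelihood ratio is overwhelming, observers who started from very different priors end up agreeing, so arguing about priors only matters in the weak-evidence case:

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior P(H|D) from prior P(H) and the ratio P(D|H)/P(D|~H)."""
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

# A skeptic (prior 0.001) and a believer (prior 0.5), facing
# weak evidence vs. overwhelming evidence (invented numbers).
for lr in (2.0, 1e6):
    print(f"LR={lr:g}: skeptic={posterior(0.001, lr):.6f}, "
          f"believer={posterior(0.5, lr):.6f}")
# With LR=2 the prior dominates; with LR=1e6 both posteriors are ~1.
```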

No, the multiple comparisons problem, like optional stopping and other selection effects that alter error probabilities, is a much greater problem in Bayesian statistics, because Bayesians regard error probabilities, and the sampling distributions on which they are based, as irrelevant to inference once the data are in hand. That is a consequence of the likelihood principle (which follows from inference by Bayes' theorem). I find it interesting that this blog takes a great interest in human biases, but guess what methodology is relied upon to provide evidence of those biases? Frequentist methods.
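For readers unfamiliar with the optional-stopping effect referenced above, here is a simulation sketch (all numbers illustrative): a "peek after every observation and stop at significance" rule drives the frequentist false-positive rate far above the nominal 5%, which is exactly the error-probability concern being described:

```python
import random

def false_positive_with_peeking(max_flips: int = 1000) -> bool:
    """Flip a fair coin (so the null is true), test after every flip, and
    'reject the null' the first time |z| exceeds 1.96."""
    heads = 0
    for n in range(1, max_flips + 1):
        heads += random.random() < 0.5
        z = abs(heads - n / 2) / (0.5 * n ** 0.5)
        if n >= 10 and z > 1.96:  # nominal 5% two-sided test, reused repeatedly
            return True
    return False

random.seed(0)
trials = 2000
rate = sum(false_positive_with_peeking() for _ in range(trials)) / trials
print(f"false-positive rate with optional stopping: {rate:.0%}")  # well above 5%
```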

If there were a genuine philosophy-of-science illumination, it would be clear that, despite the shortcomings of the logical empiricist setting in which Popper found himself, there is much more of value in a sophisticated Popperian methodological falsificationism than in Bayesianism. If scientists were interested in the most probable hypotheses, they would stay as close to the data as possible. But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal. Moreover, you cannot falsify with Bayes' theorem, so you'd have to start out with an exhaustive set of hypotheses that could account for the data (already silly), and then you'd never get rid of them -- they could only be probabilistically disconfirmed.

Strictly speaking, one can't falsify with any method outside of deductive logic -- even your own Severity Principle only claims to warrant hypotheses, not falsify their negations. Bayesian statistical analysis is just the same in this regard.

A Bayesian analysis doesn't need to start with an exhaustive set of hypotheses to justify discarding some of them. Suppose we have a set of mutually exclusive but not exhaustive hypotheses. The posterior probability of a hypothesis under the assumption that the set is exhaustive is an upper bound for its posterior probability in an analysis with an expanded set of hypotheses. A more complete set can only make a hypothesis less likely, so if its posterior probability is already so low that it would have a negligible effect on subsequent calculations, it can safely be discarded.
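A minimal numerical sketch of that upper-bound point, with made-up likelihoods and a uniform prior over whichever hypotheses are under consideration: expanding the hypothesis set can only shrink each existing hypothesis's posterior, because the normalizing constant only grows:

```python
def posteriors(likelihoods):
    """Posterior probabilities under a uniform prior: P(H_i|D) is
    proportional to the likelihood P(D|H_i)."""
    total = sum(likelihoods)
    return [lk / total for lk in likelihoods]

# Treating {H1, H2} as exhaustive: H2's posterior is ~0.012.
print(posteriors([0.8, 0.01]))

# Adding a third hypothesis can only lower it: H2 drops to ~0.008.
print(posteriors([0.8, 0.01, 0.5]))
```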

But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal.

I'm a Bayesian probabilist, and it doesn't go against my ideal. I think you're attacking philosophical subjective Bayesianism, but I don't think that's the kind of Bayesianism to which lukeprog is referring.

Unfortunately, we find ourselves in a world where the world's policy-makers don't just profess that AGI safety isn't a pressing issue, they also aren't taking any action on AGI safety. Even generally sharp people like Bryan Caplan give disappointingly lame reasons for not caring. :(

Why won't you update towards the possibility that they're right and you're wrong?

This model should rise to prominence much sooner than some very-low-prior, complex model in which you're a better truth-finder about this topic but not about any topic where truth-finding can be tested reliably*, and they're better truth-finders about topics where truth-finding can be tested (which is what happens when they do their work), but not about this particular topic.

(*because if you expect that, then you should end up actually trying to do at least something that can be checked because it's the only indicator that you might possibly be right about the matters that can't be checked in any way)

Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.

This model should rise up much sooner than some very low prior complex model where you're a better truth finder about this topic...

It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time. Also, I think the most productive way to resolve these debates is not to argue the meta-level issues about social epistemology, but to have the object-level debates about the facts at issue. So if Caplan replies to Carl's comment and my own, then we can continue the object-level debate, otherwise... the ball's in his court.

Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.

This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff. And when have I said that some public figure agreeing with me made me more sure I'm right? See also my comments here.

If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.

It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time.

Yes, but why did Caplan not see fit to think about the issue for a significant time, while you did?

There's also the AI researchers who have had the privilege of thinking about relevant subjects for a very long time, education, and accomplishments which verify that their thinking adds up over time - and who are largely the actual source for the opinions held by the policy makers.

By the way, note that the usual method of rejecting wrong ideas is not even coming up with wrong ideas in the first place, and general non-engagement with wrong ideas. This is because the space of wrong ideas is much larger than the space of correct ideas.

What I expect to see in the counter-factual world where the AI risk is a big problem, is that the proponents of the AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.

but to have the object-level debates about the facts at issue.

The first problem with highly speculative topics is that great many arguments exist in favour of either opinion on a speculative topic. The second problem is that each such argument relies on a huge number of implicit or explicit assumptions that are likely to be violated due to their origin as random guesses. The third problem is that there is no expectation that the available arguments would be a representative sample of the arguments in general.

This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff.

Hmm, I was under the impression that you weren't a big supporter of the hard takeoff to begin with.

If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.

Well, your confidence should be increased by the agreement; there's nothing wrong with that. The problem is when it is not balanced by the expected decrease by disagreement.

What I expect to see in the counter-factual world where the AI risk is a big problem, is that the proponents of the AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.

There are a great many differences in our world model, and I can't talk through them all with you.

Maybe we could just make some predictions? E.g. do you expect Stephen Hawking to hook up with FHI/CSER, or not? I think... oops, we can't use that one: he just did. (Note that this has negligible impact on my own estimates, despite him being perhaps the most famous and prestigious scientist in the world.)

Okay, well... If somebody takes a decent survey of mainstream AI people (not AGI people) about AGI timelines, do you expect the median estimate to be earlier or later than 2100? (Just kidding; I have inside information about some forthcoming surveys of this type... the median is significantly sooner than 2100.)

Okay, so... do you expect more or fewer prestigious scientists to take AI risk seriously 10 years from now? Do you expect Scott Aaronson and Peter Norvig, within 25 years, to change their minds about AI timelines, and concede that AI is fairly likely within 100 years (from now) rather than thinking that it's probably centuries or millennia away? Or maybe you can think of other predictions to make. Though coming up with crisp predictions is time-consuming.

Well, I too expect some form of something that we would call "AI", before 2100. I can even buy into some form of accelerating progress, albeit the progress would be accelerating before the "AI" due to the tools using relevant technologies, and would not have that sharp of a break. I even do agree that there is a certain level of risk involved in all the future progress including progress of the software.

I have a sense you misunderstood me. I picture this parallel world where legitimate, rational inferences about the AI risk exist, where this risk is worth working on in 2013 and stands out among the other risks, and where any other pre-requisites for making MIRI worthwhile hold. And in this imaginary world, I expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.

You do frequently lament that the AI risk is underfunded, under-supported, and suffers from under-awareness. In the hypothetical world, this is not the case, and you can only lament that the rational spending should be 2 billion rather than 1 billion.

edit: and of course, my true rejection is that I do not actually see rational inferences leading there. The imaginary world stuff is just a side-note to explain how non-experts generally look at it.

edit2: and I have nothing against FHI's existence and their work. I don't think they are very useful or address any actual safety issues that may arise, but I am fairly certain they aren't doing any harm either (or at least, the possible harm would be very small). Promoting the idea that AI is possible within 100 years, however, is something that increases funding for AI all across the board.

I have a sense you misunderstood me. I picture this parallel world where legitimate, rational inferences about the AI risk exist, where this risk is worth working on in 2013 and stands out among the other risks, and where any other pre-requisites for making MIRI worthwhile hold. And in this imaginary world, I expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.

Right, this just goes back to the same disagreement in our models I was trying to address earlier by making predictions. Let me try something else, then. Here are some relevant parts of my model:

I expect most highly credentialed people to not be EAs in the first place.

I expect most highly credentialed people to be mostly just aware of risks they happen to have heard about (e.g. climate change, asteroids, nuclear war), rather than attempting a systematic review of risks (e.g. by reading the GCR volume).

I expect most highly credentialed people to respond fairly well when actuarial risk is easily calculated (e.g. asteroid risk), and not-so-well when it's more difficult to calculate (e.g. many insurance companies went bankrupt after 9/11).

I expect most highly credentialed people to have spent little time on explicit calibration training.

I expect most highly credentialed people to not systematically practice debiasing like some people practice piano.

I expect most highly credentialed people to know very little about AI, and very little about AI risk.

I expect that in general, even those highly credentialed people who intuitively think AI risk is a big deal will not even contact the people who think about AI risk for a living in order to ask about their views and their reasons for them, due to basic VoI failure.

I expect most highly credentialed people to have fairly reasonable views within their own field, but to often have crazy views "outside the laboratory."

I expect most highly credentialed people to not have a good understanding of Bayesian epistemology.

I expect most highly credentialed people to continue working on, and caring about, whatever their career has been up to that point, rather than suddenly switching career paths on the basis of new information and an EV calculation.

I expect most highly credentialed people to not understand lots of pieces of "black swan epistemology" like this one and this one.

The question should not be about "highly credentialed" people alone, but about how they fare compared to people with very low credentials.

In particular, on your list, I expect people with fairly low credentials to fare much worse, especially at identification of the important issues as well as on rational thinking. Those combine multiplicatively, making it exceedingly unlikely - despite the greater numbers of the credential-less masses - that people who lead the work on an important issue would have low credentials.

I expect most highly credentialed people to not be EAs in the first place.

"However, there is something they value more than a man's life: a trowel."

"Why a trowel?"

"If a bricklayer drops his trowel, he can do no more work until a new one is brought up. For months he cannot earn the food that he eats, so he must go into debt. The loss of a trowel is cause for much wailing. But if a man falls, and his trowel remains, men are secretly relieved. The next one to drop his trowel can pick up the extra one and continue working, without incurring debt."

Hillalum was appalled, and for a frantic moment he tried to count how many picks the miners had brought. Then he realized. "That cannot be true. Why not have spare trowels brought up? Their weight would be nothing against all the bricks that go up there. And surely the loss of a man means a serious delay, unless they have an extra man at the top who is skilled at bricklaying. Without such a man, they must wait for another one to climb from the bottom."

All the pullers roared with laughter. "We cannot fool this one," Lugatum said with much amusement.

The merit of The Spy Who Came in from the Cold, then – or its offence, depending where you stood – was not that it was authentic, but that it was credible.

John le Carré, explaining that he didn't have insider information about the intelligence community (and that if he had, he would not have been allowed to publish The Spy Who Came in from the Cold), but that a great many people who thought James Bond was too implausible wanted to believe that le Carré's book was the real deal.

Professor Zueblin is right when he says that thinking is the hardest work many people ever have to do, and they don't like to do any more of it than they can help. They look for a royal road through some short cut in the form of a clever scheme or stunt, which they call the obvious thing to do; but calling it doesn't make it so. They don't gather all the facts and then analyze them before deciding what really is the obvious thing.

There is one very valid test by which we may separate genuine, if perverse and unbalanced, originality and revolt from mere impudent innovation and bluff. The man who really thinks he has an idea will always try to explain that idea. The charlatan who has no idea will always confine himself to explaining that it is much too subtle to be explained. The first idea may be really outré or specialist; it may be really difficult to express to ordinary people. But because the man is trying to express it, it is most probable that there is something in it, after all. The honest man is he who is always trying to utter the unutterable, to describe the indescribable; but the quack lives not by plunging into mystery, but by refusing to come out of it.

The man who really thinks he has an idea will always try to explain that idea.

I don't think that's the case. There are plenty of shy intellectuals who don't push their ideas on other people. Darwin sat on his big idea for more than a decade.

There are ideas that are about qualia. It doesn't make much sense to try to explain to a blind person what red looks like, and the same goes for other ideas that rest on observed qualia instead of resting on theory.
If I believe in a certain idea because I experienced a certain qualia and I have no way of giving you the experience of the same qualia, I can't explain the idea to you. In some instances I might still try to explain to the blind what red looks like, but there are also instances where I see it as futile.

One way of teaching certain lessons in Buddhism is to give a student a koan that illustrates the lesson and let him meditate on the koan for hours. I don't see anything dishonest about teaching certain ideas that way.

If someone thinks about a topic in terms of black and white it just takes time to teach him to see various shades of grey.

I discovered as a child that the user interface for reprogramming my own brain is my imagination. For example, if I want to reprogram myself to be in a happy mood, I imagine succeeding at a difficult challenge, or flying under my own power, or perhaps being able to levitate objects with my mind. If I want to perform better at a specific task, such as tennis, I imagine the perfect strokes before going on court. If I want to fall asleep, I imagine myself in pleasant situations that are unrelated to whatever is going on with my real life.

My most useful mental trick involves imagining myself to be far more capable than I am. I do this to reduce the risk that I turn down an opportunity just because I am clearly unqualified[...] As my career with Dilbert took off, reporters asked me if I ever imagined I would reach this level of success. The question embarrasses me because the truth is that I imagined a far greater level of success. That's my process. I imagine big.

Persecution for the expression of opinions seems to me perfectly logical. If you have no doubt of your premises or your power, and want a certain result with all your heart, you naturally express your wishes in law, and sweep away all opposition. To allow opposition by speech seems to indicate that you think the speech impotent, as when a man says that he has squared the circle, or that you do not care wholeheartedly for the result, or that you doubt either your power or your premises... But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas -- that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out.

There is no glory, no beauty in death.
Only loss.
It does not have meaning.
I will never see my loved ones again.
They are permanently lost to the void.
If this is the natural order of things,
then I reject that order.
I burn here my hopelessness,
I burn here my constraints.
By my hand, death shall fall.
And if I fail, another shall take my place
... and another, and another,
until this wound in the world
is healed at last.

It works similarly for psychology. People who study psychology learn a dozen different explanations of human thinking and behavior, so the smarter among them know these things are far from settled, and that perhaps there is no simple answer that explains everything. On the other hand, some people just read a random book on psychology, and they believe they understand everything completely.

If you don’t study philosophy you’ll absorb it anyway, but you won’t know why or be able to be selective.

This seems true. What I am curious about is whether it remains true if you substitute "don't" with "do". Those that do study philosophy have not on average impressed me with their ability to discriminate among the bullshit.

In those first seconds, I'm always thinking some version of this: "Oh, no!!! This time is different. Now my arm is dead and it's never getting better. I'm a one-armed guy now. I'll have to start drawing left-handed. I wonder if anyone will notice my dead arm. Should I keep it in a sling so people know it doesn't work or should I ask my doctor to lop it off? If only I had rolled over even once during the night. But nooo, I have to sleep on my arm until it dies. That is so like me. What happens if I sleep on the other one tomorrow night? Can I learn to use a fork with my feet?"

Then at about the fifth second, some feeling returns to my arm and I experience hope. I also realize that if people could lose their arms after sleeping on them there wouldn't be many people left on earth with two good arms. Apparently the rational part of my mind wakes up last.

Another bad indication is when we feel sorry for people applying for the program. We used to fund people because they seemed so well meaning. We figured they would be so happy if we accepted them, and that they would get their asses kicked by the world if we didn't. We eventually realized that we're not doing those people a favor. They get their asses kicked by the world anyway.

Foundations matter. Always and forever. Regardless of domain. Even if you meticulously plug all abstraction leaks, the lowest-level concepts on which a system is built will mercilessly limit the heights to which its high-level “payload” can rise. For it is the bedrock abstractions of a system which create its overall flavor. They are the ultimate constraints on the range of thinkable thoughts for designer and user alike. Ideas which flow naturally out of the bedrock abstractions will be thought of as trivial, and will be deemed useful and necessary. Those which do not will be dismissed as impractical frills — or will vanish from the intellectual landscape entirely. Line by line, the electronic shanty town grows. Mere difficulties harden into hard limits. The merely arduous turns into the impossible, and then finally into the unthinkable.

[...]

The ancient Romans could not know that their number system got in the way of developing reasonably efficient methods of arithmetic calculation, and they knew nothing of the kind of technological paths (i.e. deep-water navigation) which were thus closed to them.

Somebody could give me this glass of water and tell me that it's water. But there are a lot of clear liquids out there, and I might actually have a real case that this might not be water. Now, in most cases when a liquid like that is in a cup, it's water.

A good way to find out if it’s water is to test if it has two hydrogens per oxygen in each molecule in the glass and you can test that. If it evaporates like water, if it tastes like water, freezes like water… the more tests we apply, the more sure we can be that it’s water.

However, suppose it were some kind of acid and we started to test: we'd find that the hydrogen count is off, the oxygen count is off, it doesn't taste like water, it doesn't behave like water, it doesn't freeze like water; it just looks like water. The more of these tests we do, the more we will know the true nature of the liquid in this glass. That is how we find truth. We can test it any number of ways; the more we test it, the more we know the truth of what it is that we're dealing with.
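As a rough sketch of why "the more we test it, the more we know", here is a toy sequential Bayesian update; the likelihood ratios are invented for illustration, and each test is assumed independent:

```python
# Invented likelihood ratios P(result | water) / P(result | not water)
# for a series of independent tests of the clear liquid.
tests = [
    ("evaporates like water", 3.0),
    ("tastes like water", 5.0),
    ("freezes like water", 10.0),
    ("two hydrogens per oxygen", 100.0),
]

odds = 1.0  # 50/50 prior odds that the clear liquid is water
for name, lr in tests:
    odds *= lr  # each passed test multiplies the odds in favor of water
    print(f"after '{name}': P(water) = {odds / (1 + odds):.4f}")
```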

You argue that it would be wrong to stab my neighbor and take all their
stuff. I reply that you have an ugly face. I commit the "ad hominem" fallacy
because I'm attacking you, not your argument. So one thing you could do is
yell "OI, AD HOMINEM, NOT COOL."

[...] What you need to do is go one step more and say "the ugliness of my
face has no bearing on moral judgments about whether it is okay to stab your
neighbor."

But notice you could've just said that without yelling "ad hominem" first! In
fact, that's how all fallacies work. If someone has actually committed a
fallacy, you can just point out their mistake directly without being a pedant
and finding a pat little name for all of their logical reasoning problems.

It's like when those stupid car buffs say "Hmmm...yeah, transmission fluid" when telling each other what they think is wrong rather than "It sounds like the part that changes the speed and torque with which the wheels turn with respect to the engine isn't properly lubricated and able to have the right hydraulic pressure, so you should add some green oil product."

Fallacy names are useful for the same reason any technical term or vocabulary is useful.

'But notice how you could've just said you meant the quantity 1+1+1+1 without yelling "four" first! In fact, that's how all "numbers" work. If someone is actually using a quantity, you can just give that quantity directly without being a mathematician and finding a pat little name for all of the quantities they used.'

Fallacy names are great for chunking something already understood. The problem is that most people who appeal to them don't understand them, and therefore mis-use them. If they spoke in descriptive phrases rather than in jargon, there would be less of an illusion of transparency and people would be more likely to notice that there are discrepancies in usage.

For instance, most people don't understand that not all personal attacks are ad hominem fallacies. The quotation encourages that particular mistake, inadvertently. So it indirectly provides evidence for its own thesis.

"You have an ugly face, so you're wrong" is ad hominem. "You have an ugly face" is not. It's just a statement. Did the speaker imply the second part? Maybe... but probably not. It was probably just an insulting rejoinder.

Insults, i.e. "attacking you, not your argument", are not what ad hominem is. It's a fallacy, remember? It's no error in reasoning to call a person ugly. Only when you conclude from this that they are wrong do you commit the fallacy.

So:

A: It's wrong to stab your neighbor and take their stuff.
B: Your face is ugly.
A: The ugliness of my face has no bearing on moral...
B, interrupting: Didn't say it does! Your face is still ugly!

They did not logically entail it but they did conversationally implicate it (see CGEL, p. 33 and following, for the difference). As per Grice's maxim of relation, people don't normally bring up irrelevant information.

B, interrupting: Didn't say it does!

At which point A would be justified in asking, “Why did you bring it up then?” And even if B had (tried to) explicitly cancel the pragmatic implicature (“It's wrong to stab your neighbor and take their stuff” -- ”I won't comment on that; on a totally unrelated note, your face is ugly”), A would still be justified in asking “Why did you change the topic?”

B here is violating Grice's maxims. That's the point. He's not following the cooperative principle. He's trying to insult A (perhaps because he is frustrated with the conversation). So applying Gricean reasoning to deduce B's intended meaning is incorrect.

If A asks "why are you changing the subject?", B's answer would likely be something along the lines of "And your mother's face is ugly too!".

"You have an ugly face, so you're wrong" is ad hominem. "You have an ugly face" is not. It's just a statement. Did the speaker imply the second part? Maybe... but probably not.

I contest the empirical claim you are making about human behaviour. That reply in that context very nearly always constitutes arguing against the point the other is making. In particular, the example to which you are replying most definitely is an example of a fallacious ad hominem.

A: The ugliness of my face has no bearing on moral...

In common practice it does. The rules do change based on attractiveness. (Tangential.)

At which point, Polly decided that she knew enough of the truth to be going on with. The enemy wasn't men, or women, or the old, or even the dead. It was just bleedin' stupid people, who came in all varieties. And no one had the right to be stupid.

However, to set yourself against all the stupidity in the world is an insurmountable task.

"Professor, I have to ask, when you see something all dark and gloomy, doesn't it ever occur to you to try and improve it somehow? Like, yes, something goes terribly wrong in people's heads that makes them think it's great to torture criminals, but that doesn't mean they're truly evil inside; and maybe if you taught them the right things, showed them what they were doing wrong, you could change -"

Professor Quirrell laughed, then, and not with the emptiness of before. "Ah, Mr. Potter, sometimes I do forget how very young you are. Sooner you could change the color of the sky."

This seems a bit mangled. The original in The Republic talks about refusing to rule, not refusing to go into politics. Makes it a bit less of a snappy exhortation for your fellow monkeys to gang up on the other monkeys for the price of actually making more sense.

In a democratic republic of over 300 million people, whether or not you "participate in politics" has virtually no effect on whether your rulers are inferior or superior to yourself (unless "participate in politics" is a euphemism for coup d'état).

And you don't even need a majority of rationalists by headcount. You just need to find and hack the vulnerable parts of your culture and politics where you have a chance of raising people's expectations for rational decision making. Actual widespread ability in rationality skills comes later.

Whenever you feel pessimistic about moving the mean of the sanity distribution, try reading the Bible or the Iliad and see how far we've come already.

You just need to find and hack the vulnerable parts of your culture and politics where you have a chance of raising people's expectations for rational decision making.

People don't expect rational decision-making from politics, because that's not what politics is for. Politics exists for the sake of power (politics), coordination and control, and of tribalism, not for any sort of decision-making. When politicians make decisions, they optimize for political purposes, not for anything external, such as economic, scientific, or cultural outcomes.

When people try to make decisions to optimize something external like that, we don't call them politicians; we call them bureaucrats.

If you tried to do what you suggest, you would end up trying not to improve or reform politics, but to destroy it. Good luck with that.

Whenever you feel pessimistic about moving the mean of the sanity distribution, try reading the Bible or the Iliad and see how far we've come already.

Depends on who "we" are. A great many people still believe in the Bible and try to emulate it, or other comparable texts.

A little cynical maybe? Politicians don't spend 100% of the time making decisions for purely political reasons. Sometimes they are trying to achieve something, even if broadly speaking the purposes of politics are as you imply.

But of course, most of the people we would prefer to be more rational don't know that's what politics is for, so they aren't hampered by that particular excuse to give up on it. Anyway, they could quite reasonably expect more rational decision making from co-workers, doctors, teachers and others.

I don't think the people making decisions to optimise an outcome are well exemplified by bureaucrats. Try engineers.

Knowing that politics is part of what people do, and that destroying it is impossible, yes I would be trying to improve it, and hope for a more-rational population of participants to reform it. I would treat a claim that the way it is now is eternal and unchangeable as an extraordinary one that's never been true so far. So, good luck with that :)

You aren't seriously suggesting the mean of the sanity distribution hasn't moved a huge amount since the Bible was written? Or even in the last 100 years? I know I'm referring to a "sanity distribution" in an unquantifiable hand-wavy way, but do you doubt that those people who believe in a literalist interpretation of the Bible are now outliers, rather than the huge majority they used to be?

Politicians don't spend 100% of the time making decisions for purely political reasons. Sometimes they are trying to achieve something, even if broadly speaking the purposes of politics are as you imply.

Certainly, they're often trying to achieve something outside of politics in order to gain something within politics. We should strive to give them good incentives so the things they do outside of politics are net benefits to non-politicians.

most of the people we would prefer to be more rational don't know that's what politics is for, so they aren't hampered by that particular excuse to give up on it

So teaching them to be more rational would cause them to be less interested in politics, instead of demanding that politicians be more rational-for-the-good-of-all. I'm not sure if that's a good or bad thing in itself, but at least they wouldn't waste so much time obsessing over politics. Being apolitical also enhances cooperation.

they could quite reasonably expect more rational decision making from co-workers, doctors, teachers and others.

That's very true, it just has nothing to do with politics. I'm all for making people more rational in general.

Knowing that politics is part of what people do, and that destroying it is impossible, yes I would be trying to improve it, and hope for a more-rational population of participants to reform it

Politicians can be rational. It's just that they would still be rational politicians - they would use their skills of rationality to do more of the same things we dislike them for doing today. The problem isn't irrationally practiced politics, it's politics itself.

I would treat a claim that the way it is now is eternal and unchangeable as an extraordinary one that's never been true so far.

It's changed a lot over the past, but not in this respect: I think no society on the scale of millions of people has ever existed that wasn't dominated by one or another form of politics harmful to most of its residents.

You aren't seriously suggesting the mean of the sanity distribution hasn't moved a huge amount since the Bible was written? Or even in the last 100 years? I know I'm referring to a "sanity distribution" in an unquantifiable hand-wavy way, but do you doubt that those people who believe in a literalist interpretation of the Bible are now outliers, rather than the huge majority they used to be?

Indeed, it depends on how you measure sanity. On the object level of the rules people follow, things have gotten much better. But on the more meta level of how people arrive at beliefs, judge them, and discard them, the vast majority of humanity is still firmly in the camp of "profess to believe whatever you're taught as a child, go with the majority, compartmentalize like hell, and be offended if anyone questions your premises".

A democratic republic is not necessary. In any kind of political regime encompassing 300 million people, your participation in politics has very small expected effect on whether your rulers are inferior to you.

I believe that the final words man utters on this Earth will be "It worked!" It'll be an experiment that isn't misused, but that will nonetheless be a rolling catastrophe. (...) Curiosity killed the cat, and the cat never saw it coming.

For the most part the objects which approve themselves to us are not so much the award of well-deserved certificates --- which is supposed by the mass of unthinking people to be the main object --- but to give people something definite to work for; to counteract the tendency to sipping and sampling which so often defeats the aspirations of gifted beings,...

--- Sir Hubert Parry, speaking to The Royal College of Music about the purpose of music examinations

Initially I thought this a wonderful quote because, looking back at my life, I could see several defeats (not all in music) attributable to sipping and sampling. But Sir Hubert is speaking specifically about music. The context tells you Sir Hubert's proposed counter to sipping and sampling: individual tuition aiming towards an examination in the form of a viva.

The general message is "counter the tendency to sipping and sampling by finding something definite to work for, analogous to working one's way up the Royal College of Music grade system". But working out the analogy is left as an exercise for the reader, so the general message, if Sir Hubert intended it at all, is rather feeble.

Secondly, you might have the nagging feeling that not much has happened, really. We wanted an answer to the question "What is truth?", and all we got were trivial truth-equivalences, and a definition of truth for sentences with certain expressions that showed up again on the right-hand side of that very definition. If that is on your mind, then you should go back to the beginning of this lecture and ask yourself what kind of answer to our initial question you expected. Reconsider, "What is 'grandfather-hood'?". Well, define it in familiar terms. What is 'truth'? Well, define it in familiar terms. That's what we did. If that's not good enough, why?

by Hannes Leitgeb, from his joint teaching course with Stephan Hartmann (author of Bayesian Epistemology) on Coursera, entitled 'An Introduction to Mathematical Philosophy'.

The course topics are "Infinity, Truth, Rational Belief, If-Then, Confirmation, Decision, Voting, and Quantum Logic and Probability". In many ways, a very LW-friendly course, with many mentions and discussions of people like Tarski, Gödel etc.

Yes, but it can be a bad sign either about what you're trying to talk yourself into, or about your state of mind. It simply means that your previous position was held strongly -- and not because of strong rational evidence alone, because stronger evidence can override that: the act of assimilating the information precludes talking yourself into it. If you have to talk yourself into something, it probably means that there is an irrational aspect to your attachment to the alternative.

And that irrational, often emotional attachment can be either right or wrong; were this not true, gut feeling would answer every question truthfully, and the first plausible explanation one could think of would always be correct.

Y'know, there are all sorts of counterexamples to this ... but I think it's still a bad sign, if not a definitive one, on the basis that if I had been more suspicious of things I was talking myself into, I would have had a definite net benefit to my life. (Not counting times I was neurohacking myself, admittedly, but that's not really the same.)

Furthermore, to achieve justice -- to deter, to exact retribution, to make whole the victim, or to heal the sick criminal, whichever one or more of these we take to be the goal of justice -- we must almost always respond to force with force. Taken in isolation, that response will itself look like an initiation of force. Furthermore, to gather the evidence we need in most cases to achieve sufficiently high levels of confidence -- whether balance of the probabilities, clear and convincing evidence, or beyond a reasonable doubt -- we often have to initiate force with third parties -- to compel them to hand over goods, to let us search their property, or to testify. If politics could be deduced, this might be called the Central Theorem of Politics -- we can't properly respond to a global initiation of force without local initiations of force.

1) It uses a different and wider category of examples. Viz. "initiate force [...] to compel them to hand over goods, to let us search their property, or to testify."

2) It makes a consequentialist claim about forcing people to e.g. let us search their property for evidence: "we can't properly respond to a global initiation of force without local initiations of force."

The second difference here is important because it directly contradicts the typical libertarian claim of "if we force people to do things much less than we currently do, that will lead to good consequences." The first difference is rhetorically important because it is a place where people's gut reaction is more likely to endorse the use of force, and people have been less exposed to memes about forcibly searching people's property (compared to the ubiquity of people disliking taxes) that would cause them to automatically respond rather than think.

The second difference here is important because it directly contradicts the typical libertarian claim of "if we force people to do things much less than we currently do, that will lead to good consequences."

Actually that isn't what Szabo is saying. His point is to contradict the claim of the anarcho-capitalists that "if we never force people to do things, that will lead to good consequences."

I was instructed long ago by a wise editor, "If you understand something you can explain it so that almost anyone can understand it. If you don't, you won't be able to understand your own explanation." That is why 90% of academic film theory is bullshit. Jargon is the last refuge of the scoundrel.

Based on the Hebrew original, a more accurate translation would be: "The beginning of knowledge is to acquire knowledge, and in all of your acquisitions acquire understanding", pointing to two important principles:

1. First to gain the relevant body of knowledge and only then to begin theorizing.

2. To focus our wealth and energy on knowledge.

The wisdom books of the Bible are pretty unusual compared to the rest of the Bible, because they're an intrusion of some of the best surviving wisdom literature. As such, they're my favorite parts of the Bible, and I've found them well worth reading (in small doses, a little bit at a time, so I'm not overwhelmed).

I'm avoiding the term "free will" here because experience shows that using that term turns into a debate about the definition. I prefer to say we're all just particles bumping around. Personally, I don't see how any of those particles, no matter how they are arranged, can sometimes choose to ignore the laws of physics and go their own way.

For purely practical reasons, the legal system assigns "fault" to some actions and excuses others. We don't have a good alternative to that system. But since we are all a bunch of particles bumping around according to the laws of physics (or perhaps the laws of our programmers) there is no sense of "fault" that is natural to the universe.

I prefer to say we're all just particles bumping around. Personally, I don't see how any of those particles, no matter how they are arranged, can sometimes choose to ignore the laws of physics and go their own way.

I personally can't see how a monkey turns into a human. But that's irrelevant, because that is not the claim of natural selection. Likewise, this makes a strawman of most positions that endorse something approximately like free will. Also:

For purely practical reasons, the legal system assigns "fault" to some actions and excuses others.

Just the legal system? Gah. Everybody on earth does this about 200 times a day.

He had also learned that the sick and unfortunate are far more receptive to traditional magic spells and exorcisms than to sensible advice; that people more readily accept affliction and outward penances than the task of changing themselves, or even examining themselves; that they believe more easily in magic than reason, in formulas than experience.

Trouble rather the tiger in his lair than the sage among his books. For to you kingdoms and their armies are things mighty and enduring, but to him they are but toys of the moment, to be overturned with the flick of a finger.

In sports, […] arguments are not particularly damaging—in fact, they can be fun. The problem is that these same biased processes can influence how we experience other aspects of our world. These biased processes are in fact a major source of escalation in almost every conflict, whether Israeli-Palestinian, American-Iraqi, Serbian-Croatian, or Indian-Pakistani.

In all these conflicts, individuals from both sides can read similar history books and even have the same facts taught to them, yet it is very unusual to find individuals who would agree about who started the conflict, who is to blame, who should make the next concession, etc. In such matters, our investment in our beliefs is much stronger than any affiliation to sport teams, and so we hold on to these beliefs tenaciously. Thus the likelihood of agreement about “the facts” becomes smaller and smaller as personal investment in the problem grows. This is clearly disturbing. We like to think that sitting at the same table together will help us hammer out our differences and that concessions will soon follow. But history has shown us that this is an unlikely outcome; and now we know the reason for this catastrophic failure.

But there’s reason for hope. In our experiments, tasting beer without knowing about the vinegar, or learning about the vinegar after the beer was tasted, allowed the true flavor to come out. The same approach should be used to settle arguments: The perspective of each side is presented without the affiliation—the facts are revealed, but not which party took which actions. This type of “blind” condition might help us better recognize the truth.

In all these conflicts, individuals from both sides can read similar history books and even have the same facts taught to them, yet it is very unusual to find individuals who would agree about who started the conflict, who is to blame, who should make the next concession, etc.

In my experience, who started the conflict, who is to blame, etc. is explicitly taught as fact to each side's children. Israelis and Palestinians don't agree on facts at all. A civilized discussion of politics generally requires agreeing not to discuss most past facts.

Here’s the bigger point: Americans (and maybe all humans, I’m not sure) are more obsessed with words than with their meanings. I will never understand this as long as I live. Under FCC rules, in broadcast TV you can talk about any kind of depraved sex act you wish, as long as you do not use the word “fuck.” And the word itself is so mysteriously magical that it cannot be used in any way whether the topic is sex or not. “What the fuck?” is a crime that carries a stiff fine -- “I’m going to rape your 8-year-old daughter with a trained monkey,” is completely legal. In my opinion, today’s “gluten-free” cartoon is far more suggestive in an unsavory way than the vampire cartoon, but it doesn’t have a “naughty” word so it’s okay.

Are we a nation permanently locked in preschool? The answer, in the case of language, is yes.

"The FCC has defined broadcast indecency as “language or material that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory organs or activities.”

I am fairly sure that "I’m going to rape your 8-year-old daughter with a trained monkey" would count as describing sexual activities in patently offensive terms, and would not be allowed when direct use of swear words would not be allowed. Just because you don't use a list of words doesn't mean that what you say will be automatically allowed.

Furthermore, the Wikipedia page on the seven words (http://en.wikipedia.org/wiki/Seven_dirty_words) points out that "The FCC has never maintained a specific list of words prohibited from the airwaves during the time period from 6 a.m. to 10 p.m., but it has alleged that its own internal guidelines are sufficient to determine what it considers obscene." It also notes cases where the words were used in context and permitted.

In other words, this quote is based on a sound-bite distortion of actual FCC behavior and, as inaccurate research, is automatically ineligible to be a good rationality quote.

I am fairly sure that "I’m going to rape your 8-year-old daughter with a trained monkey" would count as describing sexual activities in patently offensive terms, and would not be allowed when direct use of swear words would not be allowed.

What is the basis for you being sure?

Howard Stern, a well-known "shock jock", spent many years on airwaves regulated by the FCC. He more or less specialized in "describing sexual activities in patently offensive terms" and while he had periodic run-ins with the FCC, he, again, spent many years doing this.

The FCC rule is deliberately written in a vague manner to give the FCC discretionary power. As a practical matter, the seven dirty words are effectively prohibited by FCC and other offensive expressions may or may not be prohibited. Broadcasters occasionally test the boundaries and either get away with it or get slapped down.

I don't buy it. Even if we accept for the sake of argument that limiting sexual references on broadcast TV is a good plan (a point that I don't consider settled, by the way), using dirty words as a proxy runs straight into Goodhart's law: the broadcast rules are known in advance, and innuendo's bread and butter to TV writers. A good Schelling point has to be hard to work around, even if you can't draw a strict line; this doesn't qualify.

True, but it's not clear morals have saved us from this. Many of our morals emphasize loyalty to our own groups (e.g. the USA) over our out-groups (e.g. the USSR), with less than ideal results. I think if I replaced "morality" with "benevolence" I'd find the quote more correct. I likely read it too literally.

Isherwood was evidently anxious to convince the youth that the relationship he desired was that of lovers and friends rather than hustler and client; he felt possessive and was jealous of Bubi's professional contacts with other men, and the next day set off to resume his attempt to transform the rent boy into the Ideal Friend. Coached by Auden, whose conversational German was a good deal better than his own at this stage, he delivered a carefully prepared speech; he had, however, overlooked the Great Phrase-book Fallacy, and was quite unable to understand Bubi's reply.

Now, what I want is, Facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.

--Mr. Gradgrind, from Hard Times by Charles Dickens.

The character is portrayed as a villain, but this quote struck me as fair (if you take a less confused view of "Facts" than Gradgrind).

Facts alone are fairly useless without processes for using them to gather more. A piece of paper can have facts inscribed upon it more durably than the human mind can, yet we rely on the latter rather than the former to guide us through life because it is capable of using those facts, not merely possessing them.

"Aw, you can't feed your family on minimum wage? well who told you to start a fucking family when your skills are only worth minimum wage?"

Pax Dickinson, former Chief Technology Officer at Business Insider, on rational family planning in the context of modern capitalism.

(in response to "That's perhaps an argument for the parents to starve but the children are moral innocents wrt their creation. Solutions?") - "If you remove all consequences to children from their parents' stupid behavior, how will they ever learn any better?"

Him again on personal responsibility, setting proper incentives for the lazy masses, and learning one's place in society early on.

This isn't rational. It's just elitist snobbery. You can use the exact same structure of argument with respect to anything:

Aw, you got raped? Well who told you to go into a room with your friend without a handgun on you? Didn't you know you should be prepared to kill every man around you in case they turn on you?

Structurally identical.

It's an ideology of knives in the dark, the screams of the dying and enslaved, and the blood red light of fire on steel. Those who honestly endorse its underlying principles would just as happily endorse any barbarism on the strength of the defeated's inability to escape it, provided it went on at some suitable distance from them.

Why not be honest and sum up the only real thing it says? - Vae victis.

If you ignore differences in probability of outcome, you'll end up conflating arguments of enormously different meaningful content. For instance, both of the above also have the same structure as

Aw, you broke your leg? Well, who told you to jump off the roof of a three story building?

That two arguments have the same structure need not imply that they are equally valid, if the implications of the premises are different.

Getting raped may be a possible consequence of walking into a room with a friend without a means to defend oneself, but it's by no means a probable consequence, and we have to weigh risks against the limitations precautions impose on us. If the odds of rape in that circumstance were, say, a predictable eighty percent, then for all that the advice pattern-matches to the widely condemned act of "victim shaming," walking into the room without a means of self-defense was a bad idea (disregarding, for the sake of argument, everything that led to that risk arising in the first place).

It is true that a woman in such a situation would be well advised to arm herself. However, a complaint about being raped - personal emotional traumas aside - would be a complaint about the necessity of doing so as much as anything else. The response that she should'a armed herself then doesn't address the real meat of the issue: what sort of society we live in, how we want to relate to one another, whether we're to respond with compassion or dismissive brutalism (or at what point on that scale).

There are things that are the result of natural laws: if you jump off a building with no precautions, you're probably gonna go splat. Complaints about such outcomes amount to complaints about the laws of physics, which makes limited sense. So the balance in those cases swings more towards preventative advice, in a way that's rarely the case with issues that are the result of human action.

There's certainly a concern, very pressing in the case of the rape example, that if the risk is too high then there's a responsibility upon society to mitigate it. In the case of the jumping off the roof example, building codes could mandate that the building be made impossible to jump off of or the surroundings be cushioned, but in this case most people would probably agree that the costs on society are too high to be justified in light of the minimal and easily avoidable risk. The case of the minimum wage worker falls somewhere in the middle ground between these, where the consequences are highly predictable, and the actions that would cause them avoidable, but with a significant cost of avoidance, like being unable to trust one's acquaintances, and unlike being unable to jump off a roof. And of course, as in both the other examples, limiting that risk comes with an associated cost.

Whether society should be structured to allow people to raise families while working on minimum wage is a question of cost/benefit analysis, which in this case is likely to be quite difficult, so it doesn't help to declare the question structurally similar to other, easier questions of cost/benefit analysis.

I don't disagree with you on any particular point there. However, the quote I was responding to wasn't, as I see it, attempting to explore the cost/benefit of raising the minimum wage or subsidising the future of children. It was stating that they just shouldn't have kids - and in that much represented an effective blank cheque. That seems the opposite of your much more nuanced approach, bound by implications of fact and reason that are going to be specific to particular issues and cases and thus can't be generalised in the same way.

"You will begin to touch heaven, Jonathan, in the moment that you touch perfect speed. And that isn't flying a thousand miles per hour, or a million, or flying at the speed of light. Because any number is a limit, and perfection doesn't have limits. Perfect speed, my son, is being there."