What would it look like if someone was truly curious — if they actually wanted true beliefs? Not someone who wanted to feel like they sought the truth, or to feel their beliefs were justified. Not someone who wanted to signal a desire for true beliefs. No: someone who really wanted true beliefs. What would that look like?

A truly curious person would seek to understand the world as broadly and deeply as possible. They would study the humanities but especially math and the sciences. They would study logic, probability theory, argument, scientific method, and other core tools of truth-seeking. They would inquire into epistemology, the study of knowing. They would study artificial intelligence to learn the algorithms, the math, the laws of how an ideal agent would acquire true beliefs. They would study modern psychology and neuroscience to learn how their brain acquires beliefs, and how those processes depart from ideal truth-seeking processes. And they would study how to minimize their thinking errors.

They would practice truth-seeking skills as a musician practices playing her instrument. They would practice "debiasing" techniques for reducing common thinking errors. They would seek out contexts known to make truth-seeking more successful. They would ask others to help them on their journey. They would ask to be held accountable.

They would not flinch away from experiences that might destroy their beliefs. They would train their emotions to fit the facts.

They would update their beliefs quickly. They would resist the human impulse to rationalize.

But even all this could merely be a signaling game to increase their status in a group that rewards the appearance of curiosity. Thus, the final test for genuine curiosity is behavioral change. You would find a genuinely curious person studying and learning. You would find them practicing the skills of truth-seeking. You wouldn't merely find them saying, "Okay, I'm updating my belief about that" — you would also find them making decisions consistent with their new belief and inconsistent with their former belief.

Every week I talk to people who say they are trying to figure out the truth about something. When I ask them a few questions about it, I often learn that they know almost nothing of logic, probability theory, argument, scientific method, epistemology, artificial intelligence, human cognitive science, or debiasing techniques. They do not regularly practice the skills of truth-seeking. They don't seem to say "oops" very often, and they change their behavior even less often. I conclude that they probably want to feel they are truth-seeking, or they want to signal a desire for truth-seeking, or they might even self-deceivingly "believe" that they place a high value on knowing the truth. But their actions show that they aren't trying very hard to have true beliefs.

More or less accurate, though of course there's a ton of stuff left unsaid. "Study fields like these" is easy to say; "learn skills like these" is really difficult to do. There's no easy way to communicate skills, and sanity is a skill-set. You kinda just have to hope people have enough lucidity to make connections between fields and reliably see single-step implications, and enough ambition to seek out and learn the skills from better thinkers than themselves. E.g., thanks to having brilliant friends I know a lot of cognitive tricks and verbal patterns that can't be learned from books or blog posts. Before I had these skills I wasn't a very good truth-seeker, and I'm sure a lot of people get stuck in that valley. Luckily, a surplus of lucidity and reflectivity gets you a long way by itself.

Yes, getting skilled is hard, but reading this site helped me immensely.
Yet my experience is that my peers don't believe me when I say that studying fields like philosophy and rationality helps people think, and my (Catholic) English teacher doesn't even want to hear anything about these topics. Getting over social politics is pretty difficult for me.

Would you be willing to try to explain your ideas about why chess is important (for people like me who don't particularly like chess) and maybe talk about the "cognitive tricks and verbal patterns"?

Something is missing here. Curiosity about what? Are we only supposed to care about having true beliefs or are we supposed to care about what those true beliefs tell us about the world? I'm betting on the latter. In fact, I would go further and say that it is better to have lots of literally false but approximately true beliefs about the world than to have only a few, completely true beliefs about the world or even to have the same number of completely true beliefs but not have them cover as much interesting territory.

I don't have a theory of what makes intellectual territory more or less interesting, but I am pretty sure that content matters. I could have infinite curiosity and complete commitment to having true beliefs and still never know anything really interesting if my curiosity is about the results of simple addition problems and I am careful in carrying out those additions.

One possible reason this may have been downvoted is that Less Wrong-ers tend not to distinguish between "true beliefs" and "what those true beliefs tell us about the world". Okay, I may be committing the mind projection fallacy, I don't know. At least I think of them as kind of the same thing if those "true beliefs" are fundamental enough (which, it's worth pointing out, kind of makes them "more true" if you have a reductionist viewpoint).

For example, knowledge of addition may tell you little in itself, but if you think about addition, it's an abstraction of a useful operation that holds for any kind of object, which implicitly claims (it seems to me) that some physical laws are universal. The same idea could lead you to the notion of logic (since it has the idea that you can make universal statements about form, versus content).

I hope that's not the reason for the downvote, because that completely misses the point of my comment.

My point is basically that the advice Luke gives -- while very good advice -- is not advice that follows from the simple desire to believe true things.

Believing true things is great. Believing true, interesting, useful things is better. Believing true, interesting, useful things while not believing false, trivial, useless things is better still. The content of our beliefs matters and that fact should be up front in the goal of rational inquiry. Again, the goal of rational inquiry is not simply to have true beliefs.

If simply having as many true beliefs as possible were really the goal of rational inquiry, then the best strategy would be to believe everything (assuming that is possible). By believing everything, you believe all of the true things. Sure, you also believe a lot of false things, but Luke didn't ask about what it would look like if people really wanted to avoid believing falsehoods.

I don't know if I was especially unclear in the earlier comment or if I was too uncharitable in my reading of Luke's post. Whatever. It is still the case that the goal of maximizing the number of true beliefs one holds is a bad goal. A slightly better goal is to maximize the number of true beliefs one has while minimizing the number of false beliefs that one has. But such a rule leads, I think, to a strategy of acquiring safe but trivial beliefs. For example, the belief that 1293 cubed is equal to the sum of three consecutive prime numbers. Such a belief does not have nearly the same utility as the belief that bodies in rectilinear motion tend to remain in rectilinear motion unless impressed upon by a force, even if the latter belief is only approximately true.
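Incidentally, "safe but trivial" beliefs of this kind are exactly the ones a machine can verify for you. A short sketch in Python of how one might check a claim of the "sum of three consecutive primes" form; the helper names are mine, not the commenter's, and the sieve bound is deliberately generous for small inputs:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_sum_of_three_consecutive_primes(n):
    """True iff n = p + q + r for consecutive primes p < q < r."""
    ps = primes_up_to(n)  # generous bound; roughly n // 3 plus slack would do
    return any(sum(ps[i:i + 3]) == n for i in range(len(ps) - 2))

print(is_sum_of_three_consecutive_primes(83))  # 23 + 29 + 31 — prints True
print(is_sum_of_three_consecutive_primes(16))  # prints False
```

Running the same check on `1293 ** 3` is just a larger call of the same function, though for a number that size you would want the tighter sieve bound noted in the comment to make it practical.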

I am not picking on Luke's advice. The advice is great. I am picking on his starting point. I don't think the goal as stated leads to the advice. Something is missing from the goal as stated.

Every week I talk to people who say they are trying to figure out the truth about something. When I ask them a few questions about it, I often learn that they know almost nothing of logic, probability theory, argument, scientific method, epistemology, artificial intelligence, human cognitive science, or debiasing techniques...I conclude that they probably want to feel they are truth-seeking, or they want to signal a desire for truth-seeking, or they might even self-deceivingly "believe" that they place a high value on knowing the truth. But their actions show that they aren't trying very hard to have true beliefs.

Really? What percent of people are aware of the existence of cognitive biases? One percent? At least I wouldn't expect more than that to realize that probability theory or artificial intelligence bear upon questions in seemingly unrelated fields like philosophy or medicine.

And of people who know of the existence of cognitive biases, how many are even capable of genuinely entertaining the thought that they themselves might be biased, as opposed to Rush Limbaugh or unethical pharmaceutical researchers or all those silly people who disagree with them?

And of people who are worried about cognitive biases, how many have access to "debiasing techniques"? I'm not going to put a percent on this one because it's pretty vague, but outside of Less Wrong and a few ahead-of-the-game finance companies, you can't exactly go on Amazon and buy Debiasing for Dummies.

I think I agree with the conclusion (well, maybe, since I don't know enough psychodynamics to really be able to cash out a phrase like "their actions show that they aren't trying very hard") but this particular argument breaks Hanlon's Razor, aka the Generalized Anti-Hanson Principle.

In their search for "true beliefs" they would quickly discover that there is no such thing as "actually true", but that science deals in more and more viable models. So they would abandon their search for "truth" and go on to search for better and better-fitting models. (See definitions of science ... and, for a philosophical point of view, radical constructivism).

Our senses don't perceive the "real" world. They build a highly refined and effective illusion of complete perception. (See, for example, the blind spot in the eye, the physiology of color perception, or our sense of hearing.)

Likewise, our minds always use simplified models. No one would be able to catch a falling ball if he had to actually calculate the flight curve - yet even children are able to do it.
That's because if you keep your eyes fixed on the ball and its image rises through your visual field at a steady rate, you are standing at roughly the right spot to catch it. (If the optical rise is accelerating, the ball will land behind you and you have to move back; if it is slowing, move forward.)

So, doing the actual calculations would be a waste of time, because there's a simpler way to catch a ball.
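The ball-catching shortcut alluded to above is usually called optical acceleration cancellation, a relative of the gaze heuristic. A quick sanity check of the geometry, sketched in Python under idealized no-drag assumptions (the function name and parameters are mine): for a fielder standing exactly at the landing point, the tangent of the gaze elevation angle grows linearly in time, so "move until the optical rise is steady" converges on the right spot without ever computing the flight curve.

```python
def gaze_tangents(vx, vy, g=9.81, samples=8):
    """Tangent of a projectile's elevation angle, as seen by a fielder
    standing at the landing point (idealized: no air drag, flat ground)."""
    T = 2 * vy / g                       # total flight time
    landing_x = vx * T                   # where the ball comes down
    tans = []
    for i in range(1, samples):          # skip t=0 and t=T (ball at eye level)
        t = T * i / samples
        x = vx * t                       # ball's horizontal position
        y = vy * t - 0.5 * g * t * t     # ball's height
        tans.append(y / (landing_x - x))
    return tans

tans = gaze_tangents(vx=12.0, vy=20.0)
# Successive differences are constant: the ball's image rises at a steady
# optical rate only for an observer standing at the landing spot.
diffs = [b - a for a, b in zip(tans, tans[1:])]
print(all(abs(d - diffs[0]) < 1e-9 for d in diffs))  # prints True
```

Algebraically, for an observer at the landing point the tangent reduces to g*t / (2*vx), which is linear in t; any other standing position makes it accelerate or decelerate, which is the error signal the heuristic exploits.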

Even if you always use the best and newest models science provides, you will never, ever really be at the frontier, because so many papers are published every day. And even if you could be, scientists are able to err. And do so frequently.

(See for example selection bias. There are LOTS of papers about it.)

So, someone on a quest to find "truth" is a romantic twerp who will accomplish nothing, because he will expect to find something static and final. Science and understanding are processes.

The only fields of human endeavor where you find "truth" are mathematics and religion.

(Most things I mention in this article can be found on Wikipedia. If you don't understand something, look it up. Everyone should know about scientific models, "truth", and constructivism. Oh, and the physiology of our senses.)

I'm confused. If there's no such thing as the actually true, how are we to understand the claims you make in this comment? Are they not actually true? I take it you don't think they're false. Perhaps they're what you call viable models? But of what? And if you do want to say that they're viable models, you would be saying that its true that they're viable models. But then is the claim that they're viable models itself a viable model? Doesn't this lead to a vicious regress?

Also, they would seek to personally become an immortal super-intelligence, since many truths simply can't be learned by an unenhanced human, and certainly not within a human lifetime.

(Which is why the Yudkowsky-Armstrong Fun-Theoretic Utopia leaves me cold. Would any curious person not choose to become superintelligent and "have direct philosophical conversations with the Machines" if the only alternative is to essentially play the post-Singularity equivalent of the World of Warcraft?)

On a grand scale, my hunger for truths is probably as limited and easy to satisfy as my hunger for cheeseburgers. I do feel that in a post-Singularity world I'd want to enhance my intelligence, but the underlying motivation seems to be status-seeking, a desire to be significant.

On a grand scale, my hunger for truths is probably as limited and easy to satisfy as my hunger for cheeseburgers.

I have very good reasons to think that my hunger for cheeseburgers is limited and easy to satisfy (e.g., ample evidence from past consumption/satiation of various foods including specifically cheeseburgers). On the other hand, there seems good reason to suspect that if my appetite for truths is limited, the satiation level comes well after what can be achieved at human intelligence level and within a human lifetime (e.g., there are plenty of questions I want answers to that seem very hard, and every question that gets answered seems to generate more interesting and even harder questions).

(It's an interesting question whether all my questions could be answered within 1 second after the Singularity occurs, or if it would require more than the resources in our entire light cone, or something in between, but the answer to that doesn't affect my point that a curious person would seek to become superintelligent.)

I do feel that in a post-Singularity world I'd want to enhance my intelligence, but the underlying motivation seems to be status-seeking, a desire to be significant.

If Omega offered to enhance your intelligence and/or answer all your questions, but for your private benefit only (i.e., you couldn't tell anyone else or otherwise use your improved intelligence/knowledge to affect the world), would you not be much interested?


If Omega offered to enhance your intelligence and/or answer all your questions, but for your private benefit only (i.e., you couldn't tell anyone else or otherwise use your improved intelligence/knowledge to affect the world), would you not be much interested?


I would be interested, but I wouldn't take it unless I got a solid technical explanation of "affect the world" that allowed me to do at least as much as I am doing now.

No, I wouldn't be much interested, I'd even pay to refuse the offer because I don't want the frustration of being unable to tell anyone.

You aren't willing to just console yourself with all the hookers, cars, drugs, holidays and general opulence you have been able to buy with the money you earned with your 'personal benefit only' intelligence? Or are we to take it that we can't even use the intelligence to benefit ourselves materially and can only use it to sit in a chair and think to ourselves?

Worst-case (and probable) scenario, you get trapped inside your head and forced to watch your body act like an idiot. If you could engage in transactions, you could make lots of money and then selectively do business with people you like.

Something I learned viscerally while I was recovering from brain damage is that intelligence is fun. I suspect I'd want to enhance my intelligence in much the same way that I'd want to spend more time around puppies.

I have an IQ in the 140-ish range. (At least, that's what the professionally administered test I had when I was a child said. Online IQ tests tell me I've lost 20 IQ points in the intervening years. Make of that what you will.)

I would estimate I regularly converse "in real life" with someone of above-average IQ a few times a year. This is just a guess, of course. One indicator of the accuracy of this assessment (granting that education as a proxy for intelligence isn't perfect) is that no one in my circle of friends or family that I regularly communicate with has ever gone to, or graduated from, anything beyond high school.

That's certainly true. "If you're routinely the smartest guy in the room, find a different room."

And yeah, in a "post-Singularity" world that contained a lot of different ranges of intelligence I would probably tune my intelligence to whatever range I was interacting with regularly, which might involve variable intelligence levels, or even maintaining several different disjoint chains of experience.

And I'm perfectly prepared to believe that past a certain point the negative tradeoffs of marginal increases in intelligence outweigh the benefits.

But at least up to that threshold, I would likely choose to socialize with other people who tuned themselves up to that level. It's admittedly an aesthetic preference, but it's mine.

(Which is why the Yudkowsky-Armstrong Fun-Theoretic Utopia leaves me cold. Would any curious person not choose to become superintelligent and "have direct philosophical conversations with the Machines" if the only alternative is to essentially play the post-Singularity equivalent of the World of Warcraft?)

I was under the impression that in Yudkowsky's image of utopia, one gains intelligence slowly over centuries, such that one has all the fun that can be had at one level but not the one above, then goes up a level, then has that level's fun, etc.

I'd handle shame-flavored incentives with tongs. It's plausible that I have an unusual degree of sensitivity on the subject, but I'm making progress on a very bad case of self-hatred and akrasia, and "is my curiosity good enough?" strikes me as a sort of self-alienation which takes focus away from paying attention to whatever you might be curious about.

"What might I be missing about this?", "How can I increase my enthusiasm for learning?", "How can I spend less time on errors while still taking on difficult projects?" seem much safer. "What am I doing to improve my life? Is it having the desired effect?" should probably be on the list.

They would study logic, probability theory, argument, scientific method, and other core tools of truth-seeking. They would inquire into epistemology, the study of knowing. They would study artificial intelligence to learn the algorithms, the math, the laws of how an ideal agent would acquire true beliefs. They would study modern psychology and neuroscience to learn how their brain acquires beliefs, and how those processes depart from ideal truth-seeking processes. And they would study how to minimize their thinking errors.

Not necessarily. Hindsight bias is likely at work here. You know that studying these fields helped you to acquire better beliefs, and so you conclude that this consequence should be obvious. But unless a curious but untrained reasoner somehow finds out that studying these fields will help them, we shouldn't expect them to study them. Why on earth would someone try to read The Logic of Science if they didn't already know that it would improve their reasoning skills?

There are a lot more genuinely curious people out there than there are rationalists. But unless those curious people happen to meet a LWer, or stumble across a link to this site, their chances of learning the benefits of studying these subjects are not great. There are a few books that might get them started (Robyn Dawes is coming to mind), but how likely is it that they're going to stumble across one of those books, especially if they aren't explicitly interested in the field of human thought already (like EY?).

I would bet that there are a lot of genuinely curious people out there who have realized that thinking is a skill. But if you were to ask them for the best way they knew of to improve that skill, they would say something along the lines of "sudoku puzzles". And that's pretty sad.

I agree in part, though this excuse was stronger before Google. Now people can Google "how to think better" or "how to figure out what's true" and start looking around. One thing leads to another. Almost all the stuff I mention above is discussed in many of the textbooks on thinking and deciding — like, say, Thinking and Deciding.

If their default response to seeing a book they might want to read is 'I'm gonna buy it!!', they're doing something wrong.

(OK, maybe they don't know about that site, but searching mediafire or demonoid or something is still an option.)

Edit: (17.01.2012) Following this discussion, I conclude that 'they're very probably not optimising their reading habits for existential risk reduction' is a better choice of words here than 'they're doing something wrong.'

Is it supposed to be obvious that there's something wrong with preferring to obey the law even when doing so costs money?

Is it supposed to be obvious that there's something wrong with preferring to own physical books rather than electronic ones even when that costs money?

(It is not the purpose of this comment to make any claim about the merits of either idea beyond this: It seems to me that neither is, in fact, obvious. But, for the benefit of anyone who thinks it relevant, I happen to have both preferences; I buy a lot of books and fail to see that this indicates anything wrong with me. Of course I don't have either absolutely; I'm pretty sure that there are circumstances in which I would break the law for financial gain, and there are some books that I'm content to use in electronic form rather than paying extra for physical copies. But my default response to seeing a book I might want to read isn't exactly "I'm gonna buy it!"; if it were then my house would be physically filled with books and I would have no money left.)

I don't know on what basis you say that the expected utility loss is "effectively zero". There's a utility gain to the person who takes an illegal copy of the book instead of buying it, because they have more money that way. There's a utility loss (which I'd have thought is obviously approximately equal in general) to the people who'd have profited directly from the sale of the book: author, publisher, distributor. And then there are second-order effects, less localized and therefore harder to see and harder to assess, from (e.g.) the slightly reduced incentives for others to write, publish and sell books, the increased social acceptability of getting books in this way, etc.

It looks to me as if what we have here is: first-order effects that cancel out exactly when expressed in terms of money, and therefore probably cancel out approximately when expressed in terms of utility, and second-order effects that are hard to get a handle on but look clearly negative to me.

Could you justify your position further on this point?

As for the second claim, note that this also needs to be true to make your "doing something wrong" assertion correct -- and ought to be obvious to justify your having made it so baldly. I'm glad you agree that it isn't.

No one was claiming or suggesting that anyone should go straight from "I'd find it interesting to read that" to buying the book, without any consideration or weighing of consequences in between. So if my last sentence is equivalent to your main point, it seems to me that you were attacking a straw man.

I'd consider them second-order effects. (Note: by "second-order" here I mean something like "less direct, more diffuse, and harder to evaluate", not "smaller". I appreciate that this is a bit woolly; perhaps the distinction isn't a helpful one.)

There's a utility loss (which I'd have thought is obviously approximately equal in general) to the people who'd have profited directly from the sale of the book: author, publisher, distributor.

If I buy a car, I do not factor in the utility loss to the manufacturers of buggy whips.

And then there are second-order effects, less localized and therefore harder to see and harder to assess, from (e.g.) the slightly reduced incentives for others to write, publish and sell books, the increased social acceptability of getting books in this way, etc.

The latter effect is by far net positive, as a much larger number of people can now gain access to much greater amounts of knowledge.

Books were being written long before IPR, they will continue to be written long after IPR. Culture will not stop being produced if stripped of legal protection.

No one was claiming or suggesting that anyone should go straight from "I'd find it interesting to read that" to buying the book, without any consideration or weighing of consequences in between.

Note the comment I was replying to:

So, someone would google "how to think better", find a $38.90 book by an author they've never heard of before, and buy it without suspecting it to be self-help nonsense?

Note the entirety of my reply that you replied to:

If their default response to seeing a book they might want to read is 'I'm gonna buy it!!', they're doing something wrong.

Note the last sentence of your reply to that:

But my default response to seeing a book I might want to read isn't exactly "I'm gonna buy it!"; if it were then my house would be physically filled with books and I would have no money left.

There was absolutely no disagreement between us on that particular point; you seem to have generalised my statement far beyond what it actually said.

Also... we have wandered dangerously far into politics. (I am, ideologically at least, a supporter of the Pirate Parties.)

If I buy a car, I do not factor in the utility loss to the manufacturers of buggy whips.

So much the worse for you. (Though of course you should also factor in the utility gain to everyone who benefits from advancing technology, etc. And of course in practice one often ignores everything but the first-order effects.)

However, I was not talking about anything remotely resembling the loss to buggy whip manufacturers when you buy a car. I was referring to the elementary fact that when you pay for something, the money you lose by paying for it goes to other people; what you lose, they gain.

Books were written long before IPR

For sure, and of course I neither claimed nor implied otherwise. I claimed only that if writing and selling books becomes less profitable, that will tend to reduce the incentive to do it.

Note the entirety of my reply that you replied to

But what you quoted here was not the entirety of your reply, in an important respect: "doing something wrong" was a hyperlink to library.nu. The existence and destination of a hyperlink are an important part of the content of the sentence that contains the link, no?

we have wandered dangerously far into politics.

The fact that an issue has been taken up by a single-issue political party doesn't mean that discussing it constitutes wandering into politics. In any case, let me elaborate something I already said: I am not arguing here (1) that existing laws about "intellectual property" are any good, or (2) that it is always (or even usually) a Bad Thing to copy things illegally. I am saying only that there are not-obviously-crazy reasons why someone might prefer to pay for a physical book rather than copying an illicit electronic copy. They aren't all legal reasons, either.

However, I was not talking about anything remotely resembling the loss to buggy whip manufacturers when you buy a car. I was referring to the elementary fact that when you pay for something, the money you lose by paying for it goes to other people; what you lose, they gain.

But what you quoted here was not the entirety of your reply, in an important respect: "doing something wrong" was a hyperlink to library.nu.

Touche, I hadn't thought of that. So the entirety of my reply is:

If their default response to seeing a book they might want to read is 'I'm gonna buy it!!', they're doing something wrong. Here's how they can do it better: Pirate the book. Also, I know this awesome site where you can do exactly that...

But I still don't see how you can interpret that to mean: "There's something wrong with buying books, you should exclusively pirate them," which is what you seem to be arguing against.

The fact that an issue has been taken up by a single-issue political party doesn't mean that discussing it constitutes wandering into politics.

Semantical dispute. Whether you call it 'politics' or not, my mind recognises it as an exclusively political issue, and, as such, is already beginning to die. For instance, if I hadn't jumped directly (although without consciously intending to) to the 'put down this political opponent' mode, I might've said 'the benefits of free knowledge to millions far surpass the monetary losses to a few thousand; if you think otherwise, it's probably scope insensitivity.' Instead I said....

Do you support the damnable Buggy Whip Party, Comrade Gjm? Do you?

... I guess I need to work on that.

I am not arguing here ... that it is always (or even usually) a Bad Thing to copy things illegally. I am saying only that there are not-obviously-crazy reasons why someone might prefer to pay for a physical book rather than copying an illicit electronic copy. They aren't all legal reasons, either.

I don't know why you keep repeating that, since both of us agree perfectly about it.

I think that's simply wrong. It would be right if the only difference between you and a publishing house were that the publishing house has more money, but of course that's not so. To a rough approximation, a publishing house is made up of lots of individuals. Much of your $20 will be distributed amongst them, and if they're on average about as well off as you are then this is roughly utility-neutral. Some of the rest will go into whatever larger-scale projects the publishing house is engaged in, which make use of economies of scale to get increasing returns in utility per dollar. (That's why there are corporations.)

And, of course, some of it will go to line the pockets of already-wealthy investors and executives. I agree that that bit is likely to show diminishing returns. But I see no reason to think that a transfer of $20 from you to the publishing house is a net utility loss, and just saying "diminishing returns" certainly doesn't suffice.

To you and everyone else reading this: PM me and I'll let you use my account.

library.nu isn't much better than other free book sites like freebookspot.com. The main good thing about it is that it hosts files on ifile.it, which means that you can download as much as you want for free, and it has some books not on other ebook sites (which you can still easily get via google). On the other hand, it has no rating system (so if you search "calculus" you have 100 pages to look through), books aren't categorized well, and it has no system for book recommendations (unlike freebookspot, for instance).

In general, if you want a specific book, just google it. If you want a book but not a specific one, use an ebook site like freebookspot or library.nu.

I tried typing those queries (and related ones) into google, to see if someone could easily find some sort of starting point for rationality. "How to think better" yields many lists of tips that are mediocre at best (things like: exercise, become more curious, etc). About halfway down the page, interestingly, is a post on CSA, but it's not a great one. It seems to mostly say that to get better at thinking you first have to realize that you are not naturally a fantastic thinker. This is true, but it's not something that points the way forward towards bayesian rationality. (by the way, "how to figure out what's true" provides essentially nothing of value, at least on the first page).

In order for someone to go down the path you've identified on their own, as a curious individual, they would need a substantial amount of luck to get started. Either they would have to have somehow stumbled upon enough of an explanation of heuristics and biases to realize their importance (a combination of two fairly unlikely events), or they would have to be studying those subjects for some reason other than their instrumental value. Someone who started off curiously studying AI would have a much better chance at finding this path, for this reason. AI researchers, in this instance, have a tremendous advantage when it comes to rationality over researchers in the hard sciences, engineers, etc.

You're right; mediocre is not the best word for what I meant there. Humans generally function better when they exercise, but exercise doesn't fundamentally change the way people think. In a car metaphor, exercise is like changing the oil and keeping the engine well tuned. That can make a big difference, but not as big a difference as upgrading the engine.

I've created ads in Google AdWords that will start appearing in a couple of days, once they're approved, so that anyone searching for something even vaguely like "how to think better" or "how to figure out what's true" will get pointed at Less Wrong. Not as good as owning the top 3 spots in the organic results, but some folks click on ads, especially when the ad is in the top spot. And we do need to make landing on the path towards rationality less a stroke of luck and more a matter of certainty for those who are looking.

I'm not an expert, but with this in mind it should be a rather simple matter to apply a few strategies so that LW shows up near the top of relevant search results. At the very least we could create wiki pages with titles like "How to Think Better" and "How to Figure Out What's True" with links to relevant articles or sequences. The fact that rationality has little obvious commercial value should work in our favor by keeping competing content rather sparse.

Is rationality a common enough word that people would naturally jump to it when trying to figure out how to think better? I'm not sure how often I used it before Less Wrong, but I know that it is substantially more commonplace after reading the sequences.

Do "curious" people want to learn the (already discovered) truth or to discover heretofore unknown truths? You seem to conflate the two. Data on the statistical correlation between these distinct motives would be interesting, but I doubt most scientists are primarily concerned with personally accumulating true beliefs. Preparing to make a contribution to human knowledge probably looks a lot different from preparing to absorb the greatest mass of truths. It probably also looks different from preparing to function rationally where quotidian beliefs are concerned.

One suggests either using the default URLs or Markdown (basic guidance is available from Show Help at the bottom right of the window that appears once one hits Comment). Shorteners are bad fur teh interwebz, and if the characters are valuable, http://ow.ly would be better.

I think he's saying ow.ly is better only in that it gives you some control over the content of the shortener. What I don't understand is 'shorteners are bad for the interwebs'—how? In what sense? Is this advice prudential or normative?

I think the "shorteners are bad" is shorthand for, "it will become hard to find information later, if the shortener service goes out of business, because the shortlinks won't lead anywhere and you will have no idea what they originally pointed to."

Besides the scraping of millions of Delicious users, a small subset of Archive Team has formed URL Team, dedicated to pulling down the content of URL shorteners. URL shorteners may be one of the worst ideas, one of the most backward ideas, to come out of the last five years. In very recent times we've seen per-site shorteners, where a website registers a smaller version of its hostname and provides a single small link for a more complicated piece of content within it; those are fine. But these general-purpose URL shorteners, with their shady or fragile setups and utter dependence upon them, well. If we lose TinyURL or bit.ly, millions of weblogs, essays, and non-archived tweets lose their meaning. Instantly. To someone in the future, it'll be like everyone from a certain era of history, say ten years of the 18th century, started speaking in a one-time pad of cryptographic pass phrases. We're doing our best to stop it. Some of the shorteners have been helpful, others have been hostile. A number have died. We're going to release torrents on a regular basis of these spreadsheets, these code-breaking spreadsheets, and we hope others do too.

I think you are confusing wanting to know with being good at it.

Imagine someone in the Stone Age: would you say no one was genuinely curious, just because they didn't know about all those fields that hadn't been invented yet?

Then what about someone living in our world, but not knowing about Bayesian reasoning, AI, and so on? How can they know that those fields are fundamental to learn, in order to satisfy their curiosity about another field, before at least learning the basics of them? When you don't know about Bayes' theorem, but you are curious (you really want to know the truth) about, say, ancient Roman history, or about whether there ever was life on Mars, what would drive you to learn probability theory? How can you know you must learn it in order to learn something else, when you don't know much about that other thing?

Sure, if you are curious, you'll want to learn about all fields. But since we have a limited amount of time, you can't expect someone to learn Bayesian reasoning, even if he's really curious, unless there is some kind of trigger that makes him realize how useful it would be to be efficient at being curious.

Genuinely wanting something and being good at pursuing it are not directly linked. You can't say someone doesn't really want something just because he doesn't pursue it in an efficient way.

The stone age analogy doesn't quite fly. There's a difference between the state where I want X and someone else is offering X, and the state where nobody is offering X.

But you're right, of course, that even in the first case I have to know I want X.

That said, I don't have to know the name of the field.

For example, if I'm genuinely interested in what actually happened in ancient Rome (which is of course only one possible meaning of the phrase "ancient Rome history"), I will sooner or later discover that there are disagreements among experts about some aspects of it, as well as questions about it that are simply unanswered.

A genuinely curious person, who is actually motivated to know what actually happened in ancient Rome, will in consequence sooner or later have thoughts like "how do I decide which experts to believe?" or "how do I decide which of these competing theories is true?" or "how do I come up with answers to questions that haven't been answered yet?"

There's no particular guarantee that it will occur to them that the thing called "probability theory" or "cognitive science" is related to that (though "decision theory" seems like a reasonable thing to investigate), but asking those questions cogently enough of the Internet, often enough, will sooner or later get them there, assuming they don't settle for something along the way that doesn't actually satisfy their curiosity but gives them some other thing instead.

I think you overestimate how easy it is to "jump to the meta-level" (i.e., you want to learn about something, so you jump to learning how to learn) for people who haven't been pointed toward it, by reading Gödel, Escher, Bach, some of LW, or anything like that. Someone genuinely curious about "what actually happened in ancient Rome" will read lots of books about it, will visit the ruins, go to museums, and so on, but won't spontaneously start asking about "decision theory" or about "what is the general process for resolving disputes between scholars?" if not given strong hints that they should.

In practice, though, the reason people don't do this isn't because they couldn't do it with, say, a year or four of consistent asking, receiving answers, evaluating those answers for whether they actually resolve their genuine curiosity, discarding those answers which don't, and repeating the process.

The reason they don't do it is because they settle for one of the answers they get early on, and stop asking.

They wrote a private version which I discussed with them; they're working on a second version for public sharing now. I found their argument about the ways in which it's inaccurate and bad PR fairly convincing; I now think I made an error propagating this post.

I'm not sure there's an overarching "curiosity" that people have or don't have: I'm very curious about whether a specific kind of database will perform adequately in certain circumstances (long story), but I'm only mildly curious about how to identify which 19th-century French painter painted which picture. Some art experts, I'm sure, have cultivated the skill to guess within seconds which painter it is for any picture. I wouldn't mind having that skill -- it sounds like a fun skill to have -- but it seems like it would take more resources than it's worth. OTOH, I really want my probability estimations re: the database to reflect reality. Do I need to use AI theory? Doubtful. Probably a little bit of statistics, and even that fairly mild, but I do have to think a lot about how to use my knowledge of databases to design experiments to find out the truth. I'm not sure if that would look "curious" to the lay person (and, of course, there's also a factor of "signaling curiosity" -- I want to make sure that everyone with a stake in the process sees that I've done the due diligence), but nonetheless, I'm truly curious about this (and yes, it could go both ways... I think this is the most important part of curiosity vs. fake curiosity).

When I was genuinely curious about how US immigration law applied to me (and again, it could have gone both ways -- before running any experiments, I made sure to visualize both options, and realized I could live with either), I just called an immigration lawyer (and, for a later question, the paralegal who was involved with my visa). In that case I needed very little knowledge from LW. I didn't apply my knowledge about Bayes or about heuristics and biases; I just went and asked a professional (of course, in some cases, like wanting to know if a stock will go up, asking a professional is disastrous, but with immigration law, lawyers can estimate probabilities fairly accurately even if they lack formal rationality training).

Those were the two examples of real curiosity from my life that I could think of, that looked nothing like the description here of "real curiosity"....

I think this is a question of what satisfies your curiosity. Neither of the examples you give is a paradigm example of curiosity as such - curiosity is generally taken to mean a desire for knowledge for its own sake whereas both your examples involve seeking knowledge for practical reasons - but perhaps in your case these work and personal issues are enough to satisfy your curiosity. The fact that you're here makes me think otherwise though. Surely you read LessWrong out of curiosity?

Yes, I do seek knowledge for other reasons, here and elsewhere. But my expectation is that this will not "look like" curiosity, because I expect to have few changes in my behavior based on what I read, and so the importance of it being "true" is likewise diminished. Sure, I would like my beliefs about the brain and AI to be true, but I'm not prepared to spend a LOT of resources on it -- I'm sure if I were really curious about the role of oxytocin in relationships, I could reach true beliefs faster by spending more resources. There are gradations between "French paintings" and "database performance" in how curious I am about things, I agree, and most of Less Wrong falls somewhere in the middle. The curiosity Luke was alluding to is the all-consuming curiosity about "things I expect belief accuracy to have a large impact on my utility", and I doubt most of Less Wrong falls into that category.

Truth seekers should deliberately impose costs on themselves for holding false beliefs. That is, they should increase the cost of being wrong. One way to do this is to bet on your beliefs. Another way is to bond your beliefs: post a bond that you will forfeit if your prediction is wrong. Yes, imposing such costs is bothersome, but for truth seekers the tradeoff is easily worth it.
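One way to picture this cost-imposing idea is a proper scoring rule such as the logarithmic score, which charges you more the more confidently wrong you are. A minimal sketch, with the function name and the probabilities invented purely for illustration:

```python
import math

def log_score_penalty(prob_assigned, outcome_true):
    """Penalty under the logarithmic scoring rule for a stated probability.

    The penalty is -ln(p), where p is the probability you assigned to what
    actually happened. Confidently wrong beliefs cost far more than
    modestly held ones, so minimizing expected penalty rewards honesty.
    """
    p = prob_assigned if outcome_true else 1.0 - prob_assigned
    return -math.log(p)

# A 60% belief that turns out true costs little...
mild = log_score_penalty(0.6, outcome_true=True)
# ...while a 99% belief that turns out false costs a lot.
severe = log_score_penalty(0.99, outcome_true=False)

print(round(mild, 3))    # -ln(0.6)  ≈ 0.511
print(round(severe, 3))  # -ln(0.01) ≈ 4.605
```

Tying small real stakes (money, forfeitable bonds) to a score like this is one concrete way to make holding false beliefs cost something.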

I think otherwise - most people want to have true beliefs. However, they have rather limited trust in the powers of their own logic, as the experience of school has taught them that they are often wrong. They don't have the numerical skills to embark on anything more numerically ambitious than what money requires. They expect to be wrong often, and rarely use formal reason as such. But they still want to have true beliefs, and rely mostly on intuition and experience to decide on that.

For most people, most beliefs are socially acquired - people acquire their beliefs from the people around them, and they tend to acquire large blocks of belief together. One shouldn't underestimate the sheer amount of work needed to do anything different.

Most people never create a new idea (in the sense you're talking about) in their entire lives - they have experiences, yes, and they change beliefs based on experience. But they do not regard themselves as having the basic equipment to generate ideas, or to be sophisticated in judging between them.

In the end I've come to the view that none of us can change this (well, not anytime soon at any rate). Human beings think in groups - and the best most of us can do to help others think better is to do it for them, and talk about it sometimes. Obviously there is a group of people who can do more than that, but they are a minority.

The other comment I have on your post is that all of the above is actually just one idea: that the basis of all knowledge about the world is inductive reasoning, and that, in principle, all such reasoning should be based on the sound use of statistics. There are many mathematical ways of messing up such use, and our intuitive reasoning messes up these stats too. If you really need the right answer, you will need to learn enough to get your statistics right, and to compensate for the shortcomings of your wetware. And that's the whole idea in a nutshell.

An example. Did you know that brakes are the most dangerous piece of equipment on your car? In a staggeringly large number of accidents, the driver was using the brake at the time of the collision. Surely we could make driving safer by removing the brakes, then? Of course the thesis is ludicrous, but how many of us are confident that we wouldn't make a similar statistical mistake in a different context? But once you embark on the journey of trying to fix such problems in your own thinking, the road leads all the way to the place your post describes.
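The brakes fallacy can be made concrete with a toy base-rate calculation (all numbers here are invented for illustration): braking can lower your accident risk and still show up in most accidents, simply because nearly everyone brakes.

```python
# Invented numbers: braking *reduces* accident risk in this toy model.
p_brake = 0.9                   # fraction of drivers who brake before a near-miss
p_accident_given_brake = 0.01
p_accident_given_no_brake = 0.05

# Total probability of an accident (law of total probability)
p_accident = (p_brake * p_accident_given_brake
              + (1 - p_brake) * p_accident_given_no_brake)

# Bayes' theorem: what fraction of accidents involve braking?
p_brake_given_accident = p_brake * p_accident_given_brake / p_accident

print(round(p_brake_given_accident, 3))  # ≈ 0.643
```

So roughly 64% of accidents involve braking even though braking cuts the accident rate fivefold: P(brake | accident) being high says nothing about whether braking causes accidents.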

Most people think the journey is beyond them, and leave it to people like you and Eliezer to make the journey for them, and report back your findings. And unfortunately I don't think they're wrong about that.

However, they have rather limited trust in the powers of their own logic, as the experience of school has taught them that they are often wrong. They don't have the numerical skills to embark on anything more numerically ambitious than what money requires.

I believe the situation is a good bit worse than that. One of the underlying lessons of conventional schooling is "You can't be trusted to think about what you need to know."

One of the underlying lessons of conventional schooling is "You can't be trusted to think about what you need to know."

Do you mean this in the sense of "you can't be trusted to think about important things" or in the sense of "you can't be trusted to decide which things are important"? I agree with the second, and think it's what you mean, but not the first.

I'm having difficulty knowing what level of rationalist this is aimed at. Are the people you talk to every week students of rationality, or 'normal' people?

This post applies to both, I imagine. But because you talk about "people" instead of explicitly talking about people like me, it's easy to see this post as not being aimed at me. (Maybe it's not).

What I mean is: It's easy to praise oneself and one's peers by talking about people of a lower class. When I was young, it was 'dumb people', when I was a bit more sophisticated it was 'theists', when I was an Objectivist, it was 'non-Objectivists', and now that I'm a rationalist the temptation is to criticize those who "know almost nothing of logic, probability theory, argument, scientific method, epistemology, artificial intelligence, human cognitive science, or debiasing techniques." So this post, because it isn't clearly directed at people who have worked hard to do better in the ways prescribed by the Sequences, causes my semiconscious mind to ask: "is this a beginning level post, or something I should actually pay attention to?" Are you telling me to do better, or criticizing outsiders in order to promote group bonding?

Of course my rider knows I should pay attention; I must always work harder to cultivate the virtues. And I don't actually expect that you're just trying to promote group bonding. But what I really want - and what this post may not have been designed for - is an honest criticism of people you see who are a lot like me, in that they have access to all the same correct memes as I have, and have exerted effort to improve in the appropriate areas.

My target audience was meant to be ambiguous. Even the most curious people I know spend some of their time not looking very curious. My post is meant as a vignette of the human condition, like a scene from Gummo, and also (I hope) "valuably motivating (by way of shaming)."

I'm wary of being inspired by an example that I can't actually attain even through my best and most honest efforts. I would like to know -- and maybe it isn't possible to find out -- what it is about you, Luke Muehlhauser, that makes you such a persistent belief updater? At a biological level, even.

If variation in populations is a fact of life, then why wouldn't there be graduated levels of rationalist ability?

I predict you're selling yourself short. Maybe my weaknesses and shortcomings are largely filtered out if you know me only through my writings, but the people I work with every week could list them for you. There is clearly a level (or 5) above my own.

Moreover, I've been studying rationality for years, and since April have had the benefit of working on rationality or x-risk full time.

It's very hard to tell "what it is about me" that gives me the rationalist powers I do possess, but if I had to guess, the single biggest thing would be my deep desire to say oops whenever appropriate, which I suspect I got from having wasted 21 years of my life by failing to say oops about the supernatural. I don't want to waste my time like that again.

I could be selling myself short. And I'm certainly not surrounded by rationalist superheroes -- quite the opposite, in fact.

I should have been more specific. You're not so much famous here for how rational you are, but for the freakishly outlying amount of time you're capable of devoting to a particular kind of activity: studying stuff.

I wonder how much of that sort of sustained focus is just something you were born with, and whether it inclines you to activities that require such focus. Yes, you were Christian for 21 years, but you stopped being one in part because you were able to focus enough to read lots and lots of books disconfirming your beliefs -- itself an extremely unusual thing to do.

Then again, plenty of people with outlying levels of studiousness don't become rationalists. Despite some evidence to the contrary, monasteries probably aren't breeding grounds for atheism. And plenty of people with low mental energy have strong bullshit detectors.

I for one have read it as a possible criticism to what I do, exactly along the lines of "don't just look like you're curious and comment on LW and think that you're better because you share the beliefs of the coolest tribe" but go out and really do learn something...

(... so I ended up commenting on LW and thinking I'm even better than the people described above... to increase the number of relevant illustrations :P)

I am currently working my way through reading the responses to this essay, but I really liked your response and wanted to comment. I hope that is ok. You asked for "honest criticism" of people who are a lot like you; and while I realize you were asking Lukeprog I hope you don't mind if I throw my two cents in.

What I took from this essay is an idea I first encountered studying symbolic interactionism: that humans are animals with the unique capacity to act or to *act*. To act, in the sense that we, like all other living organisms, can intentionally impact the territory we exist within: as a person I can run, fight, search for truth, etc. What makes humans unique is that in addition to this we hold the capacity to *act*, in the sense of to pretend. In Tibetan Buddhism I believe this is called "shadow dancing", where you do something to mimic a form rather than become it. Luke has focused specifically on the act of searching for truth. People can either genuinely search for truth, or they can for political reasons don the air of a truth seeker.

Now here comes your critique. The assumption of people "who see it a lot like you" is that the former is in some way superior to the latter, that one is necessary and the other is detrimental. It is important to genuinely seek truth; however, it is just as important to be able to let that go and just *act* like you are seeking truth. Not being able to do this is a problem of moderation. Probably more than 80% of humanity needs to learn how to act, how to be genuine in purpose. However, the elite who are already purpose-driven need to learn how to balance serious action with social harmony (social harmony being what I think acting accomplishes).

I do not mean to pick a fight, but my honest criticism is you need to learn when to be irrational. What do you think?

They would study artificial intelligence to learn the algorithms, the math, the laws of how an ideal agent would acquire true beliefs.

Really? The others make sense, but it's not clear this will be useful to a human trying to learn things themselves. If I want to notice patterns, "plug all of your information into a matrix and perform eigenvector decompositions" is probably not going to get me very far.

The mathematical techniques like eigenstuff and particle methods and so on can't be directly applied by humans, but the field is still useful.

I think the big gain from AI is that you get practice in understanding and debugging mental processes, which can be applied to your own reasoning. AI theory is philosophy that's at least true if not optimally relevant.

At least for me, studying some machine learning has broadened my perspective on rationality in general. Even if we humans don't apply the algorithms we find in machine learning textbooks ourselves, I still find it illuminating to study how we try to make machines perform rational inference. The field also concerns itself with more general, if you will philosophical, questions relating to e.g. how to properly evaluate the performance of predictive agents, the trade-off between model complexity and generality, and the issue of overfitting. These kinds of questions are very general in nature and should be of interest to students of any kind of learning agent, be they human or machine.
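The complexity/overfitting trade-off mentioned above can be sketched in a few lines. This is a toy example, assuming numpy, with the data and polynomial degrees invented for illustration: a flexible model always fits the training data at least as well as a simpler one, which is exactly why training error alone can't tell you which model generalizes.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(scale=0.1, size=x.size)  # linear truth plus noise

def train_error(degree):
    """Mean squared error on the training points for a polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A degree-5 polynomial hugs the noisy points more tightly than a line,
# but the extra wiggles are fitting noise, not the underlying linear law.
print(train_error(1) >= train_error(5))  # True: nested least-squares models
```

The same lesson transfers to human belief formation: an explanation that can accommodate any observation "fits" everything and therefore predicts nothing.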

True in a way: for example, emulating a planning algorithm in your mind is a terribly inefficient way of making decisions. However, in order to understand the concept of "how an algorithm feels from inside", you need to think of yourself too as an algorithm, which is (I guess) very hard if you have no idea how agents like you might work at all.

So, as I see it, AI gives you a better grasp of "map vs. territory". Compared to "the map is the equations, the territory is what I see" you get "my mind is also a map, so where I see a pattern, maybe there is none". (See confirmation bias.)