Stances are very simple, and don’t require any specific beliefs or practices. No one explicitly promotes them. You pick them up automatically from our cultural “thought soup.” They are the ways people talk about meaning in soap operas and cafes.

Confused stances are insidious, because they are unnoticed. Because no one argues for them, no one argues against them. They are memes, mental viruses that people propagate by talking, without awareness of them.

And my position on the larger problem of meaning is to notice that my life always seems really meaningful and great when I have coffee. If I’m going to try to figure out what the actual meaning of life is, in some sort of deep principled way, I’m going to do it with as much attention to Truth as possible. And if I’m going to give myself some emotional hack that lets me go on finding life worth living, I think caffeine probably has fewer side effects than falsehood, and is just as effective.

And if you don’t respond to caffeine as well as I do, then I think the overall lesson is that the emotional problem of meaning is a basically biological one, that doesn’t connect with the philosophical problem of meaning nearly as much as you think. Get a good psychiatrist and you’ll solve the emotional problem. The philosophical problem might not be solvable, but “helping others” or “creating a positive singularity” or “[your ingroup’s political goals here]” are, though not Perfectly Objectively Grounded, grounded enough that most people don’t really want to question them once the emotional problem is solved.

I find some of Peterson’s non-truth-value-having writing effective in the same sense as caffeine; it makes me more emotionally willing to follow the truths I know I should be following. Since, jokes aside, I can’t literally be drugged 100% of the time, I appreciate that. And maybe the drug would be stronger if I were to swallow his truth-value-having claims too. But that’s not a risk-benefit profile I’m okay with right now.

All rituals look to have been “clearcut” in the modern world, because few rituals are well-adapted to the new technological human reality. But this new reality may also be seen as an island: a pristine space, unoccupied by past rituals and very leisurely by historical standards, where sensory exploitation selection may flourish: rituals may serve the emotional and aesthetic needs of humans more than ever before, because they are under fewer constraints. Only a tiny percentage of the population is now needed for the production of food, fuel, and other necessities; selection for collective action in unpleasant areas has been dramatically relaxed. There is more room for arbitrary beauty.

When I see other people making a big deal out of seemingly-minor problems, I’m in this weird superposition between thinking I’ve avoided them so easily I missed their existence, and thinking I’ve fallen into them so thoroughly I’m like the fish who can’t see water.

And when I see other people struggling to understand seemingly-obvious concepts, I’m in this weird superposition between thinking I’m so far beyond them that I did it effortlessly, and thinking I’m so far beneath them that I haven’t even realized there’s a problem.

Yesterday I reviewed a good summary of Kegan’s developmental stages, and since then, I’ve found myself every few hours waffling between “I’m so metasystematic I can’t even remember Stage 4” and “I’ve never learned how to cope with systems”.

It’s a lot to ask, but there’s one thing we should all do every time we make a claim 1) that is debatable and 2) whose truth value matters to us and our audience.

We should state the level of truth we’re speaking on—in effect, where our claim lives in the world of claims, and how wide or narrow its implications are. This is something I suspect we already do implicitly and occasionally—but I sincerely think we’d be better off if we made a habit out of doing it in plain, explicit terms.

What do I mean here? A good starting point, to which I’ve referred before, is the central idea in Meaningness, that all things (ideas, objects, living beings, you name it) are characterized by both nebulosity and pattern. Applying this to truth—or let’s just say, the truth value of a statement like “John’s grandmother is from Ohio”—it should be apparent that all truths are relative, but non-arbitrary.

We could add that, given the way information is structured “out there”, some truths refer to a wider set of things than other truths—and on, and on. We might intuitively grasp this as there being multiple levels of truth, ranging from the most total (verging on absolute, but never quite getting there), to the most specific and contingent.

So yes, John’s grandmother is from Ohio in the sense that people identify with the places where they grew up, and John’s grandmother just told us that she grew up in Ohio. But our definition of “from Ohio” might change—do we use it to mean “born in Ohio”? “Spent more than 5 years in Ohio”? “Lives in Ohio currently”? “Once made a pit stop in Ohio”? “Is made of molecules with origins in Ohio air and soil”? “Is made of molecules with origins in Ohio air and soil dating back past the Ice Age”?

This is a case where the commonly understood meaning of “from Ohio” wouldn’t vary too much in practice—but clearly there’s some wiggle room, though in no case is it infinite.

When it comes to contestable claims that draw on multiple realms of knowledge and mutual understanding, there’s just enough wiggle room that people will often talk right past each other, unaware of how incommensurate their ideas of what a claim means or the scope of the claim actually are.

This happens all the time in conversations with my brother about politics and society—unsurprising, given that conversations about politics and society include both huge, abstract, unverifiable claims and tiny, tactical, testable claims, and also given that my brother and I both agree on plenty and disagree on plenty.

It turns out that when we agree, we’re usually referring to a common set of claims that we understand in roughly the same way. When we disagree, though, we often find out—45 minutes in, say—that one of us is making claims on one hierarchical level, with X set of implications, and the other is making claims on another hierarchical level, with Y set of implications. We probably ended up making the arguments that went easiest with our gut convictions, but now at least we make more sense to each other.

To that end, I’ve gotten into the habit of prefacing my arguments with (maybe I should postface instead), “Now, I’m talking on the level of aggregate social good / metaphysics / political messaging when I say this, but…”

—

A good analogue is Slate Star Codex’s habit of labeling his conjectural posts with “Epistemic status: X” at the top. In fact, we can kind of think of “truth level” as orthogonal to “epistemic status”—when you’re parsing an argument, the scope and hierarchical context of a claim matters big time, just like the author’s degree of confidence in that claim.

But I guess they’re not entirely orthogonal. There’s a pretty obvious general pattern in which lower-level, more topical claims can be more easily assigned high confidence, and the highest-level, most abstract and systematic claims can only be made with so much confidence. Clearly communicating these variables, and this tendency, is crucial to healthy thinking.

A closing example of why:
Toward the end of a very arduous, very beautiful serotonergic experience, I called up a person I love who happens to believe in God, and who’d known me as an atheist for many years.

I told him that it no longer made sense for me to identify as an atheist, because of the depth of wonder and absolute doubt I’d developed in reference to Existence/The Universe/Ultimate Reality. Because from an honest human perspective, all bets are off.

Not a hard theist himself, he appreciated my take and said he could relate. It is also worth noting that this person is a Republican, and in the moment, he extended the logic of my cosmic doubt to the question of man-made global climate change. Something like: This is why these scientists who think they know why the climate is changing are so out of their depth.

In that moment, I had to remind him: claims about The Nature of Ultimate Reality can’t even begin to be compared with claims about the earth’s climate. The difference in scale between Universe and earth (10 followed by 20 zeroes, by the most conservative measure) means that they’re on entirely different levels. Which means that while one is impossible to make total sense of, the other can be carved at the joints rather neatly.

Magical thinking “confuses” the relationship between symbol and referent, between mind and world. Our modern world, to confuse matters even more, is mostly made of minds. As societies became literate (and then hyperliterate), “as if” thinking jumped out of its box, and useful fictions proliferated. Only the old forms of “as if” thinking, the shameful remnants of an ignorant past that we think of as “magical thinking,” were tabooed.