Mixed Reference: The Great Reductionist Project

Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

- Death, in Hogfather by Terry Pratchett

Meditation: So far we’ve talked about two kinds of meaningfulness and two ways that sentences can refer: a way of comparing to physical things found by following pinned-down causal links, and logical validity by comparison to models pinned down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?

… … …

Suppose that I pointed at a couple of piles of apples on a table, a pile of two apples and a pile of three apples.

And lo, I said: “If we took the number of apples in each pile, and multiplied those numbers together, we’d get six.”

Nowhere in the physical universe is that ‘six’ written—there’s nowhere in the laws of physics where you’ll find a floating six. Even on the table itself there’s only five apples, and apples aren’t fundamental. Or to put it another way:

Take the apples and grind them down to the finest powder and sieve them through the finest sieve and then show me one atom of sixness, one molecule of multiplication.

Nor can the statement be true as a matter of pure math, comparing to some Platonic six within a mathematical model, because we could physically take one apple off the table and make the statement false, and you can’t do that with math.

This question doesn’t feel like it should be very hard. And indeed the answer is not very difficult, but it is worth spelling out; because cases like “justice” or “mercy” will turn out to proceed in a similar fashion.

Navigating to the six requires a mixture of physical and logical reference. This case begins with a physical reference, when we navigate to the physical apples on the table by talking about the cause of our apple-seeing experiences:

Next we have to call the stuff on the table ‘apples’. But how, oh how can we do this, when grinding the universe and running it through a sieve will reveal not a single particle of appleness?

This part was covered at some length in the Reductionism sequence. Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider. Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

We also use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC. A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark. (Or a quantum field, really; but you get the idea.)

So is the 747 made of something other than quarks? And is the statement “this 747 has wings” meaningless or false? No, we’re just modeling the 747 with representational elements that do not have a one-to-one correspondence with individual quarks.

Similarly with apples. To compare a mental image of high-level apple-objects to physical reality, for it to be true under a correspondence theory of truth, doesn’t require that apples be fundamental in physical law. A single discrete element of fundamental physics is not the only thing that a statement can ever be compared to. We just need truth conditions that categorize the low-level states of the universe, so that different low-level physical states are inside or outside the mental image of “some apples on the table” or alternatively “a kitten on the table”.

Now we can draw a correspondence from our image of discrete high-level apple objects, to reality.

Next we need to count the apple-objects in each pile, using some procedure along the lines of going from apple to apple, marking those already counted and not counting them a second time, and continuing until all the apples in each heap have been counted. And then, having counted two numbers, we’ll multiply them together. You can imagine this as taking the physical state of the universe (or a high-level representation of it) and running it through a series of functions leading to a final output:
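The counting-and-multiplying procedure above can be sketched as a short program. This is only a toy illustration of “running a series of functions over a high-level representation”; the representation and function names here are my own inventions, not anything from the post:

```python
# A toy model of running functions over a high-level representation of the
# universe: the "table state" is a list of piles, each pile a collection of
# distinct apple-objects, and we pipe it through counting and multiplication.

def count_pile(pile):
    """Count apple-objects, marking each so none is counted twice."""
    counted = set()
    for apple_id in pile:
        counted.add(apple_id)  # mark as already-counted
    return len(counted)

def product_of_piles(table):
    """Take the high-level state (a list of piles) to a single number."""
    result = 1
    for pile in table:
        result *= count_pile(pile)
    return result

table = [["apple1", "apple2"], ["apple3", "apple4", "apple5"]]
print(product_of_piles(table))  # a pile of two and a pile of three -> 6
```

The final output, six, appears nowhere in the “physical” input; it only exists as the result of the logical functions run over that input.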

And of course operations like “counting” and “multiplication” are pinned down by the number-axioms of Peano Arithmetic:
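To make the pinning-down concrete, here is a minimal executable rendering of the Peano-style recursion equations for addition and multiplication. The tuple encoding of numerals is my own choice for this sketch; the recursion equations themselves are the standard ones:

```python
# Successor-style arithmetic: numerals as nested tuples, with addition and
# multiplication defined only by the Peano-style recursion equations:
#   a + 0 = a            a + S(b) = S(a + b)
#   a * 0 = 0            a * S(b) = (a * b) + a
ZERO = ()

def S(n):               # successor
    return (n,)

def add(a, b):
    return a if b == ZERO else S(add(a, b[0]))

def mul(a, b):
    return ZERO if b == ZERO else add(mul(a, b[0]), a)

def num(k):             # embed an ordinary int as a numeral
    return ZERO if k == 0 else S(num(k - 1))

def val(n):             # read a numeral back as an int
    return 0 if n == ZERO else 1 + val(n[0])

print(val(mul(num(2), num(3))))  # -> 6
```

Given the axioms, there is exactly one answer the multiplication can have; that is what it means for the operation to be pinned down.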

And we shouldn’t forget that the image of the table is being calculated from eyes which are in causal contact with the real table-made-of-particles out there in physical reality:

And then there’s also the point that the Peano axioms themselves are being quoted inside your brain in order to pin down the ideal multiplicative result—after all, you can get multiplications wrong—but I’m not going to draw the image for that one. (We tried, and it came out too crowded.)

So long as the math is pinned down, any table of two apple piles should yield a single output when we run the math over it. Constraining this output constrains the possible states of the original, physical input universe:

And thus “The product of the apple numbers is six” is meaningful, constraining the possible worlds. It has a truth-condition, fulfilled by a mixture of physical reality and logical validity; and the correspondence is nailed down by a mixture of causal reference and axiomatic pinpointing.

I usually simplify this to the idea of “running a logical function over the physical universe”, but of course the small picture doesn’t work unless the big picture works.

The Great Reductionist Project can be seen as figuring out how to express meaningful sentences in terms of a combination of physical references (statements whose truth-value is determined by a truth-condition directly corresponding to the real universe we’re embedded in) and logical references (valid implications of premises, or elements of models pinned down by axioms); where both physical references and logical references are to be described ‘effectively’ or ‘formally’, in computable or logical form. (I haven’t had time to go into this last part, but it’s an already-popular idea in philosophy of computation.)

And the Great Reductionist Thesis can be seen as the proposition that everything meaningful can be expressed this way eventually.

But it sometimes takes a whole bunch of work.

And to notice when somebody has subtly violated the Great Reductionist Thesis—to see when a current solution is not decomposable into physical and logical reference—requires a fair amount of self-sensitization before the transgressions become obvious.

Example: Counterfactuals.

Consider the following pair of sentences, widely used to introduce the idea of “counterfactual conditioning”:

(A) If Oswald hadn’t shot Kennedy, someone else would have.

(B) If Oswald hadn’t shot Kennedy, nobody else would have.

The first sentence seems agreeable—John F. Kennedy definitely was shot, historically speaking, so if it wasn’t Lee Harvey Oswald it was someone. On the other hand, unless you believe the Illuminati planned it all, it doesn’t seem particularly likely that if Lee Harvey Oswald had been removed from the equation, somebody else would’ve shot Kennedy instead.

Which is to say that sentence (A) appears true, and sentence (B) appears false.

One of the historical questions about the meaning of causal models—in fact, of causal assertions in general—is, “How does this so-called ‘causal’ model of yours differ from asserting a bunch of statistical relations? Okay, sure, these statistical dependencies have a nice neighborhood-structure, but why not just call them correlations with a nice neighborhood-structure; why use fancy terms like ‘cause and effect’?”

And one of the most widely endorsed answers, including nowadays, is that causal models carry an extra meaning because they tell us about counterfactual outcomes, which ordinary statistical models don’t. For example, suppose this is our causal model of how John F. Kennedy got shot:

Roughly this is intended to convey the idea that there are no Illuminati: Kennedy causes Oswald to shoot him, does not cause anybody else to shoot him, and causes the Moon landing; but once you know that Kennedy was elected, there’s no correlation between his probability of causing Oswald to shoot him and his probability of causing anyone else to shoot him. In particular, there’s no Illuminati who monitor Oswald and send another shooter if Oswald fails.

In any case, this diagram also implies that if Oswald hadn’t shot Kennedy, nobody else would’ve, which is computed by a counterfactual surgery a.k.a. the do(.) operator, in which a node is severed from its former parents, set to a particular value, and its descendants then recomputed:

And so it was claimed that the meaning of the first diagram is embodied in its implicit claim (as made explicit in the second diagram) that “if Oswald hadn’t shot Kennedy, nobody else would’ve”. This statement is true, and if all the other implicit counterfactual statements are also true, the first causal model as a whole is a true causal model.
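The surgery described above can be sketched in code. This is a deliberately simplified, deterministic toy version of the no-Illuminati diagram (the node names and the determinism are mine, for illustration; real causal models are probabilistic):

```python
# Toy deterministic causal model: each node is a function of the values of
# its parents.  do(node, value) performs counterfactual surgery: it severs
# the node from its parents, clamps it to the value, and lets the
# descendants be recomputed from there.

model = {
    "kennedy_elected": lambda v: True,
    "oswald_shoots":   lambda v: v["kennedy_elected"],
    "other_shooter":   lambda v: False,   # no Illuminati backup shooter
    "kennedy_shot":    lambda v: v["oswald_shoots"] or v["other_shooter"],
}
order = ["kennedy_elected", "oswald_shoots", "other_shooter", "kennedy_shot"]

def run(model):
    """Evaluate every node in causal order."""
    v = {}
    for node in order:
        v[node] = model[node](v)
    return v

def do(model, node, value):
    """Counterfactual surgery: cut the node loose from its parents."""
    surgered = dict(model)
    surgered[node] = lambda v: value
    return surgered

print(run(model)["kennedy_shot"])                              # True
print(run(do(model, "oswald_shoots", False))["kennedy_shot"])  # False
```

The surgered model answers “if Oswald hadn’t shot Kennedy, nobody else would’ve” with False for `kennedy_shot`, even though in the unsurgered model (and the actual world) Kennedy ends up shot.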

What’s wrong with this picture?

Well… if you’re strict about that whole combination-of-physics-and-logic business… the problem is that there are no counterfactual universes for a counterfactual statement to correspond to. “There’s apples on the table” can be true when the particles in the universe are arranged into a configuration where there’s some clumps of organic molecules on the table. What arrangement of the particles in this universe could directly make true the statement “If Oswald hadn’t shot Kennedy, nobody else would’ve”? In this universe, Oswald did shoot Kennedy and Kennedy did end up shot.

But it’s a subtle sort of thing, to notice when you’re trying to establish the truth-condition of a sentence by comparison to counterfactual universes that are not measurable, are never observed, and do not in fact actually exist.

Because our own brains carry out the same sort of ‘counterfactual surgery’ automatically and natively—so natively that it’s embedded in the syntax of language. We don’t say, “What if we perform counterfactual surgery on our models to set ‘Oswald shoots Kennedy’ to false?” We say, “What if Oswald hadn’t shot Kennedy?” So there’s this counterfactual-supposition operation which our brain does very quickly and invisibly to imagine a hypothetical non-existent universe where Oswald doesn’t shoot Kennedy, and our brain very rapidly returns the supposition that Kennedy doesn’t get shot, and this seems to be a fact like any other fact; and so why couldn’t you just compare the causal model to this fact like any other fact?

And in one sense, “If Oswald hadn’t shot Kennedy, nobody else would’ve” is a fact; it’s a mixed reference that starts with the causal model of the actual universe where there are actually no Illuminati, and proceeds from there to the logical operation of counterfactual surgery to yield an answer which, like ‘six’ for the product of apples on the table, is not actually present anywhere in the universe. But you can’t say that the causal model is true because the counterfactuals are true. The truth of the counterfactuals has to be calculated from the truth of the causal model, followed by the implications of the counterfactual-surgery axioms. If the causal model couldn’t be ‘true’ or ‘false’ on its own, by direct comparison to the actual real universe, there’d be no way for the counterfactuals to be true or false either, since no actual counterfactual universes exist.

So that business of counterfactuals may sound like a relatively obscure example (though it’s going to play a large role in decision theory later on, and I expect to revisit it then) but it sets up some even larger points.

For example, the Born probabilities in quantum mechanics seem to talk about a ‘degree of realness’ that different parts of the configuration space have (proportional to the integral over squared modulus of that ‘world’).

Could the Born probabilities be basic—could there just be a basic law of physics which says directly that to find out how likely you are to be in any quantum world, the integral over squared modulus gives you the answer? And the same law could’ve just as easily have said that you’re likely to find yourself in a world that goes over the integral of modulus to the power 1.99999?
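As a numerical illustration of what the squared-modulus rule does (the two-world “configuration space” and the amplitude values here are made up for the sketch; a real configuration space is continuous):

```python
# Born-rule weighting for a toy discrete set of complex amplitudes.
# Changing `power` from 2 to 1.99999 gives a perfectly computable
# alternative law; the point is that nothing in the math alone forces
# the exponent to be exactly 2.
amplitudes = {"world_A": 0.6 + 0.0j, "world_B": 0.0 + 0.8j}

def born_weights(amps, power=2):
    weights = {w: abs(a) ** power for w, a in amps.items()}
    total = sum(weights.values())
    return {w: x / total for w, x in weights.items()}

print(born_weights(amplitudes))  # world_A ~ 0.36, world_B ~ 0.64
```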

But then we would have ‘mixed references’ that mixed together three kinds of stuff—the Schrödinger Equation, a deterministic causal equation relating complex amplitudes inside a configuration space; logical validities and models; and a law which assigned fundamental-degree-of-realness a.k.a. magical-reality-fluid. Meaningful statements would talk about some mixture of physical laws over particle fields in our own universe, logical validities, and degree-of-realness.

This is just the same sort of problem as if you say that causal models are meaningful and true relative to a mixture of three kinds of stuff: actual worlds, logical validities, and counterfactuals. You’re only supposed to have two kinds of stuff.

People who think qualia are fundamental are also trying to build references out of at least three different kinds of stuff: physical laws, logic, and experiences.

Anthropic problems similarly revolve around a mysterious degree-of-realness, since presumably when you make more copies of people, you make their experiences more anticipate-able somehow. But this doesn’t say that anthropic questions are meaningless or incoherent. It says that since we can only talk about anthropic problems using three kinds of stuff, we haven’t finished Doing Reductionism to it yet. (I have not yet encountered a claim to have finished Reducing anthropics which (a) ends up with only two kinds of stuff and (b) does not seem to imply that I should expect my experiences to dissolve into Boltzmann-brain chaos in the next instant, given that if all this talk of ‘degree of realness’ is nonsense, there is no way to say that physically-lawful copies of me are more common than Boltzmann brain copies of me.)

Or to take it down a notch, naive theories of free will can be seen as obviously not-completed Reductions when you consider that they now contain physics, logic, and this third sort of thingy called ‘choices’.

And—alas—modern philosophy is full of ‘new sorts of stuff’; we have modal realism that makes possibility a real sort of thing, and then other philosophers appeal to the truth of statements about conceivability without any attempt to reduce conceivability into some mixture of the actually-physically-real-in-our-universe and logical axioms; and so on, and so on.

But lest you be tempted to think that the correct course is always to just envision a simpler universe without the extra stuff, consider that we do not live in the ‘naive un-free universe’ in which all our choices are constrained by the malevolent outside hand of physics, leaving us as slaves—reducing choices to physics is not the same as taking a naive model with three kinds of stuff and deleting all the ‘choices’ from it. This is confusing the project of getting the gnomes out of the haunted mine with trying to unmake the rainbow. Counterfactual surgery was eventually given a formal and logical definition, but it was a lot of work to get that far—causal models had to be invented first, and before then, people could only wave their hands frantically in the air when asked what it meant for something to be a ‘cause’. The overall moral I’m trying to convey is that the Great Reductionist Project is difficult; it’s not a matter of just proclaiming that there are no gnomes in the mine, or that rainbows couldn’t possibly be ‘supernatural’. There are all sorts of statements that were not originally, or are presently not obviously, decomposable into physical law plus logic; but that doesn’t mean you just give up immediately. The Great Reductionist Thesis is that reduction is always possible eventually. It is nowhere written that it is easy, or that your prior efforts were enough to find a solution if one existed.

Continued next time with justice and mercy (or rather, fairness and goodness). Because clearly, if we end up with meaningful moral statements, they’re not going to correspond to a combination of physics and logic plus morality.

Because Tegmark 4 isn’t mainstream enough yet to get it down to one.

Whether there is a way to reduce it to zero is one discovery I’m much looking forward to, but there probably isn’t. It certainly seems totally impossible, but that only really means “I can’t think of a way to do it”.

It does indeed seem possible that in the long run we’ll end up with one kind of stuff, either from the reduction of logic to physics, or the reduction of physics to math. It’s also worth noting that my present model does have magical-reality-fluid in it, and it’s conceivable that this will end up not being reduced. But the actual argument is something along the lines of, “We got it down to two crisp things, and all the proposals for three don’t have the crisp nature of the two.”

That seems to me more like an irreducible string of methods of interpretation. You have physics, whether you like it or not. If you want to understand the physics, you need math. And to use the math, you need logic. Physics itself does not require math or logic. We do, if we want to do anything useful with it. So it’s not so much “reducible” as it is “interpretable”—physics is such that turning it into a bunch of numbers and wacky symbols actually makes it more understandable. But to draw from your example, you can’t have a physical table with physically infinite apples sitting on it. Yet you can do math with infinities, but all the math in the world won’t put more apples on that table.

Just as mental gymnastics, what if instead we were able to reduce physics and logic to magical reality fluid? :)

Anyway, for the “logic from physics” camp the work of Valentin Turchin seems interesting (above all “The Cybernetic Foundation of Mathematics”). Also of note is the recent foundational program called “Univalent Foundations”.

Well, since nobody has done that yet, we cannot be sure, but for example a reduction of logic to physics could look like this: “for a system built on top of this set of physical laws, this is the set of logical systems available to it”, which would imply that all the axiomatic systems we use are only those accessible via our laws of physics. For an extreme seminal example, Turing machines with infinite time have a very different notion of “effective procedure”.

or even build up a system on top of physical laws without using logic?

It’s clear that such a demonstration needs to use some kind of logic, but I think that doesn’t undermine the (possible) reduction: if you show that the (set of) logics available to a system depends on the physical laws, you have shown that our own logic is determined by our own laws. This would entail that (possibly) different laws would have granted us different logics.
I’m fascinated for example by the fact that the concept of “second order arithmetical truth” (SOAT) is inaccessible by effective finite computation, but there are space-times that allow for infinite computation (and so systems inhabiting such a world could possibly grasp SOATs effectively).

That leaves information.
Large ensembles contain very little overall information because it takes little information to specify them, e.g. “every real number”. However, they can still seem complicated from the inside. An ultimate ensemble plausibly contains no information because there is no need to pinpoint it in EverythingPossibleSpace.

However, it is not clear that level IV is general enough, since the existence of non-mathematical thingies is not obviously impossible.

EY’s made a kind of argument that you should have two kinds of stuff (although I still think the logical pinpointing stuff is a bit weak), but he seems to be proceeding as if he’d shown that that was exhaustive. For all the arguments he’s given so far, this third post could have been entitled “Experiences: The Third Kind of Stuff”, and it would be consistent with what he’s already said.

So yeah, we need an argument for: “You’re only supposed to have two kinds of stuff.”

I think the whole point of “the great reductionist project” is that we don’t really have a sufficiency theorem, so we should treat “no more than two” as an empirical hypothesis and proceed to discover its truth by the methods of science.

We only access models via experiences. If you aren’t willing to reduce models to experiences, why are you willing to reduce the physical world of apples and automobiles to experiences? You’re already asserting a kind of positivistic dualism; I see no reason not to posit a third domain, the physical, to correspond to our concrete experiences, just as you’ve posited a ‘model domain’ (cf. Frege’s third realm) to correspond to our abstract experiences.

Agreed. The number two is ridiculous and can’t exist. Once you allow stuff to have a physical kind and a logical kind, what’s to stop you from adding other kinds like degree-of-realness and Buddha-nature?

OTOH, logical abstractions steadfastly refuse to be reduced to physics. There may be hope for the other way around, a solution to “Why does stuff exist?” that makes the universe somehow necessary. (Egan’s “conscious minds find themselves” is cute but implies either chaotic observations or something to get the minds started.) But we can’t be very optimistic.

I don’t get it. Okay, obviously our universe is a mathematical structure, that’s why physics works. “All math is real” is seductive, but “All computable math is real, but there are no oracles” is just weird; why would you expect that without experimental evidence of Church-Turing?

The idea that since there are twice as many infinite strings containing “1010” as “10100”, the former must exist twice as much as the latter nicely explains why our universe is so simple. But I’m not at all convinced that universes like ours with stable observers are simpler than pseudorandom generators that pop out Boltzmann brains.

That all math is “real” in some sense is something you observe directly any time you do any. The insight is not that math is MORE real than previously thought, but just that there isn’t some additional kind of realness. Sort of; this is an oversimplification.

Combine this with the simulation hypothesis; a universe can only simulate less computationally expensive universes. (Of course this is handwavy and barely an argument, but it’s possible something stronger could be constructed along these lines. I do think that much more work needs to be done here.)

I’m pretty sure Eliezer’s approach is the opposite of Tegmark’s. For Tegmark, the math is real and our physical world emerges from it, or is an image of part of it. For Eliezer, our world, in all its thick, visceral, spatiotemporal glory, is the Real, and logical, mathematical, counterfactual, moral, mentalizing, essentializing, and otherwise abstract reasoning is a human invention that happens to be useful because its rules are precisely and consistently defined. There’s much less urgency to producing a reductive account of mathematical reasoning when you’ve never reified ‘number’ in the first place.

Of course, that’s not to deny that something like Tegmark’s view (perhaps a simpler version, Game-of-Life-style or restricted to a very small subset of possibility-space that happens to be causally structured) could be true. But if such a view ends up being true, it will provide a reduction of everything we know to something else; it won’t be likely to help at all in reducing high-level human concepts like number or qualia or possibility directly to Something Else. For ordinary reductive purposes, it’s physics or bust.

My best vulgarization, which I hope not to be a rationalization (read: looking for more evidence that it is!), is that physical kinds of stuff are about what is, while logical kinds of stuff are about “what they do”.

If you have one lone particle¹ in an empty universe, there’s only the one kind, the physical. The particle is there. Once you have two particles, the physical kind of stuff is about how they are, their description, while the logical stuff is about the axiom “these two particles interact”—and everything that derives from there, such as “how” they interact².

I do not see any room for more kinds of stuff being necessary in order to fully and perfectly simulate all the states of the entire universe where these two particles exist. I also don’t see how adding more particles is going to change that in any manner. As per the evidence we have, it seems extremely likely that our own universe is a version of this universe with simply more particles in it.

So really, you can reduce it to “one”, if you’re willing to hyper-reduce the conceptual fundamental “is” to the simple logical “do”—if you posit that a single particle in a separate universe simply does not exist, because the only existence of a particle is its interaction, and therefore interactions are the only thing that do exist. Then the distinction between the physical and logical becomes merely one of levels of abstraction, AFAICT, and can theoretically be done away with. However, the physical-logical two-rule seems to be useful, and the above seems extremely easy to misinterpret or confuse with other things.

¹ Defined as whatever is the most fundamentally reduced smallest possible unit of the universe, be that a point in a wave field equation, a quark, or anything else reality runs on.

² I’ve read some theories (and thought some of my own) implying that there is no real “how” of interaction, and that all the interactions are simply the simplest, most primitive possible kind of logical interaction, the reveal-existence function or something similar, and that from this function derive as abstractions all the phenomena we observe as “forces” or “kinds of interactions” or “transmissions of information”. However, all such theories I’ve read are incomplete and also lack experimental verifiability. They do sound much simpler and more elegant, though.

How does EY know there are only two? Is it a priori knowledge? Is it empirical? Is it subject to falsification? How many failed reductions-to-two-kinds-of-stuff do there have to be before TKoS is falsified?

After it’s been right the last 300 times or so, we should assess a substantial probability that it will be wrong before the 1,000th occasion, but believe much more strongly that it will be correct on the next occasion.

Okay. I’ll bet with somewhere around 50% probability that the Great Reductionist Project as I’ve described it works, with reduction to a single thing counting as success, and requiring magical reality-fluid counting as failure. I’ll bet with 95% probability that it’s right on the next occasion for anthropics and magical reality-fluid, and with 99+ probability that it’s right on the next occasion for things that confuse me less; except that when it comes to e.g. free will, I don’t know who I’d accept as a judge that didn’t think the issue already settled.

Either the Great Reductionist Thesis (“everything meaningful can be expressed by [physics+logic] eventually”) is itself expressible with physics+logic (eventually) or it isn’t. If it is, then it might be true.

If it isn’t, then the Great Reductionist Thesis is not true, because the proposition it expresses is not meaningful. I’m worried about this possibility because the phrase ‘everything meaningful’ strikes me as dangerously self-referential.

Let me first say that I am grateful to Esar and RobbBB for having this discussion, and double-grateful to RobbBB for steelmanning my arguments in a very proper and reasonable fashion, especially considering that I was in fact careless in talking about “meaningful propositions” when I should’ve remembered that a proposition, as a term of art in philosophy, is held to be a meaning-bearer by definition.

I’m also sorry about that “is meaningless is false” phrase, which I’m certain was a typo (and a very UNFORTUNATE typo)—I’m not quite sure what I meant by it originally, but I’m guessing it was supposed to be “is meaningless or false”, though in the context of the larger debate now that I’ve read it, I would just say “colorless green ideas sleep furiously” is “meaningless” rather than false. In a strict sense, meaningless utterances aren’t propositions so they can’t be false. In a looser sense, an utterance like “Maybe we’re living in an inconsistent set of axioms!” might be impossible to render coherent under strict standards of meaning, while also being colloquially called ‘false’, meaning ‘not actually true’ or ‘mistaken’.

I’m coming at this from a rather different angle than a lot of existing philosophy, so let me do my best to clarify. First, I would like to distinguish the questions:

R1) What sort of things can be real?

R2) What thoughts do we want an AI to be able to represent, given that we’re not certain about R1?

A (subjectively uncertain probabilistic) answer to R1 may be something like, “I’m guessing that only causal universes can be real, but they can be continuous rather than discrete, and in that sense aren’t limited to mathematical models containing a finite number of elements, like finite Life boards.”

The answer to R2 may be something like, “However, since I’m not sure about R1, I would also like my AI to be able to represent the possibility of a universe with Time-Turners, even though, in this case, the AI would have to use some generalization of causal reference to refer to the things around it, since it wouldn’t live in a universe that runs on Pearl-style causal links.”

In the standard sense of philosophy, question R2 is probably the one about ‘meaning’ or which assertions can be ‘meaningful’, although actually the amount of philosophy done around this is so voluminous I’m not sure there is a standard sense of ‘meaning’. Philosophers sometimes try to get mileage out of claiming things are ‘conceivable’, e.g., the philosophical catastrophe of the supposed conceivability of P-zombies, and I would emphasize even at this level that we’re not trying to get R1-mileage out of things being in R2. For example, there’s no rule following from anything we’ve said so far that an R2-meaningful statement must be R1-possible, and to be particular and specific, wanting to conservatively build an AI that can represent Conway’s Game of Life + Time-Turners still allows us to say things like, “But really, a universe like that might be impossible in some basic sense, which is why we don’t live there—to speak of our possibly living there may even have some deeply buried incoherence relative to the real rules for how things really have to work—but since I don’t know this to be true, as a matter of my own mere mental state, I want my AI to be able to represent the possibility of time-travel.” We might also imagine that a non-logically-omniscient AI needs to have an R2 which can contain inconsistent axiom sets the AI doesn’t know to be inconsistent.

For things to be in R2, we want to show how a self-modifying AI could carry out its functions while having such a representation, which includes, in particular, being able to build offspring with similar representations, while being able to keep track of the correspondence between those offspring’s quoted representations and reality. For example, in the traditional version of P-zombies, there’s a problem with ‘if that was true, how could you possibly know it?’ or ‘How can you believe your offspring’s representation is conjugate to that part of reality, when there’s no way for it to maintain a correspondence using causal references?’ This is the problem of a SNEEZE_VAR in the Matrix where we can’t talk about whether its value is 0 or 1 because we have no way to make “0” or “1” refer to one binary state rather than the other.

Since the problems of R2 are the AI-conjugates of problems of reference, designation, maintenance of a coherent correspondence, etcetera, they fall within the realm of problems that I think traditional philosophy considers to be problems of meaning.

I would say that in hu­man philos­o­phy there should be a third is­sue R3 which arises from our dual de­sire to:

1. Not do that awful thing wherein somebody claims that only causal universes can be real and therefore your hypotheses about Time-Turners are meaningless noises.

2. Not do that awful thing wherein somebody claims that since P-zombies are “conceivable” we can know a priori that consciousness is a non-physical property.

In other words, we want to avoid the twin er­rors of (1) pre­emp­tively shoot­ing down some­body who is mak­ing an hon­est effort to talk to us by claiming that all their words are mean­ingless noises, and (2) try­ing to ex­tract info about re­al­ity just by virtue of hav­ing an ut­ter­ance ad­mit­ted into a de­bate, turn­ing a given inch into a taken mile.

This leads me to think that hu­man philoso­phers should also have a third cat­e­gory R3:

R3) What sort of ut­ter­ances can we ar­gue about in English?

which would roughly rep­re­sent what sort of things ‘feel mean­ingful’ to a flawed hu­man brain, in­clud­ing things like P-zom­bies or “I say that God can make a rock so heavy He can’t lift it, and then He can lift it!”—ad­mit­ting some­thing into R3 doesn’t mean it’s log­i­cally pos­si­ble, co­her­ent, or ‘con­ceiv­able’ in some rigor­ous sense that you could then ex­tract mileage from, it just means that we can go on hav­ing a con­ver­sa­tion about it for a while longer.

When somebody comes to us with the P-zombie story, and claims that it’s “conceivable” and they know this on account of their brain feeling able to conceive it, we want to reply, “That’s what I would call ‘arguable’ (R3), and if you try to treat your intuitions about arguability as data, they’re only directly data about which English sentences human brains can affirm. If you want to establish any stronger property that you could get mileage from, such as coherence or logical possibility or reference-ability, you’ll have to argue that separately from your brain’s direct access to the mere affirmability of a mere English utterance.”

At the same time, you’re not shov­ing them away from the table like you would “col­or­less green ideas sleep up with­out clam any”; you’re ac­tu­ally go­ing to have a con­ver­sa­tion about P-zom­bies, even though you think that in stric­ter senses of mean­ing like R2, the con­ver­sa­tion is not just false but mean­ingless. After all, you could’ve been wrong about that non­mem­ber­ship-in-R2 part, and they might be about to ex­plain that to you.

The Great Reductionist Thesis is about R1 - the question of what is actually real - but it’s difficult for something that lies within a reductionist’s concept of a strict R2 to turn out to be real in such a way that the Great Reductionist Thesis is falsified. For example, if we think R1 is about causal universes, and then it turns out we’re in Timetravel Life, the Great Reductionist Thesis has been confirmed, because Timetravel Life still has a formal logical description. Just about anything I can imagine making a Turing-computable AI refer to will, if real, confirm the Great Reductionist Thesis.

So is GRT philosophically vacuous because it is philosophically unfalsifiable? No: to take an extreme case, suppose we have an uncomputable and non-logically-axiomatizable sensus divinitatis enabling us to directly know God’s existence, and by baptizing an AI we could give it this sensus divinitatis in some way integrated into the rest of its mind, meaning that R2, R1, and our own universe all include things referable-to only by a sensus divinitatis. Then arguable utterances along the lines of, “Some things are inherently mysterious”, would have turned out, not just to be in R2, but to actually be true; and the Great Reductionist Thesis would be false—contrary to my current belief that such utterances are not only colloquially false, but even meaningless for strict senses of meaning. But one is not licensed to conclude anything from my having allowed a sensus divinitatis to be a brief topic of conversation, for by that I am not committing to admitting that it was strictly meaningful under strong criteria such as might be proposed for R2, but only that it stayed in R3 long enough for a human brain to say some informal English sentences about it.

Does this mean that GRT itself is merely arguable—that it talks about an argument which is only in R3? But tautologies can be meaningful in GRT, since logic is within “physics + logic”. It looks to me like a completed theory of R2 should be something like a logical description of a class of universes and a class of representations corresponding to them, which would itself be in R2 as pure math; and the theory-of-R1 “Reality falls within this class of universes” could then be physically true. However, many informal ‘negations’ of R2 like “What about a sensus divinitatis?” will only be ‘arguable’ in a human R3, rather than themselves being in R2 (as one would expect!).

R3) “What sort of utterances can we argue about in English?” is (perhaps deliberately) vague. We can argue about colorless green ideas, if nothing else at the linguistic level. Perhaps R3 is not about meaning, but about debate etiquette: What are the minimum standards for an assertion to be taken seriously as an assertion (i.e., not as a question, interjection, imperative, glossolalia, etc.)? In that case, we may want to break R3 down into a number of sub-questions, since in different contexts there will be different standards for the admissibility of an argument.

I’m not sure what exactly a sensus divinitatis is, or why it wouldn’t be axiomatizable. Perhaps it would help flesh out the Great Reductionist Thesis if we evaluated which of these phenomena, if any, would violate it:

1. Objective fuzziness. I.e., there are entities that, at the ultimate level, possess properties vaguely; perhaps even some that exist vaguely, that fall at different points on a continuum from being to non-being.

2. Ineffable properties, i.e., ones that simply cannot be expressed in any language. The specific way redness feels to me, for instance, might be a candidate for logico-physical inexpressibility; I can perhaps ostend the state, but any description of that state will underdetermine the precise feeling.

3. Objective inconsistencies, i.e., dialetheism. Certain forms of perspectivism, which relativize all truths to an observer, might also yield inconsistencies of this sort. Note that it is a stronger claim to assert dialetheism (an R1-type claim) than to merely allow that reasoning non-explosively with apparent contradictions can be very useful (an R2-type claim, affirming paraconsistent logics).

4. Nihilism. There isn’t anything.

5. Eliminativism about logic, intentionality, or computation. Our universe lacks logical structure; basic operators like ‘and’ and ‘all’ and ‘not’ do not carve at the joints. Alternatively, the possibility of reference is somehow denied; AIs cannot represent, period. This is perhaps a stronger version of 2, on which everything, in spite of its seeming orderliness, is in some fashion ineffable.

Are these com­pat­i­ble with GRT? What else that we can clearly ar­tic­u­late would be in­com­pat­i­ble? What about a model that is com­pletely ex­press­ible in clas­si­cal logic, but that isn’t on­tolog­i­cally ‘made of logic,’ or of physics? I in­tuit that a clas­si­cally mod­e­lable uni­verse that meta­phys­i­cally con­sists en­tirely of mind-stuff (no physics-stuff) would be a rather se­vere break from the spirit of re­duc­tive phys­i­cal­ism. But per­haps you in­tended GRT to be a much more mod­est and ac­com­mo­dat­ing claim than ev­ery­day sci­en­tific ma­te­ri­al­ism.

I have no ob­jec­tion to your de­scrip­tion of R3 - ba­si­cally it’s there so that (a) we don’t think that some­thing not im­me­di­ately ob­vi­ously be­ing in R2 means we have to kick it off the table, and (b) so that when some­body claims their imag­i­na­tion is giv­ing them veridi­cal ac­cess to some­thing, we can de­scribe the thing ac­cessed as mem­ber­ship in R3, which in turn is (and should be) too vague for any­thing else to be con­cluded thereby; you shouldn’t be able to get info about re­al­ity merely by ob­serv­ing that you can af­firm English ut­ter­ances.

Insofar as your GRT violations all seem to me to be in R3 and not R2 (i.e., I cannot yet coherently imagine a state of affairs that would make them true), I’m mostly willing to agree that reality actually being that way would falsify GRT and my proposed R2. But if you pick one of them and describe what you mean by it more exactly (what exactly it would be like for a universe to be like that, how we could tell if it were true), it’s entirely possible that this new version will end up in the logic-and-physics R2 and, for similar reasons, wouldn’t falsify GRT if true. E.g., a version of “nihilism” that is cashed out as “there is no ontologically fundamental reality-fluid”, denial of “reference” in which there is no ontologically basic descriptiveness, eliminativism about “logic” which still corresponds to a computable causal process, “relativized” descriptions along the lines of Special Relativity, and so on.

This isn’t meant to sneak re­duc­tion­ism in side­ways into uni­verses with gen­uinely in­ef­fable magic com­posed of ir­re­ducible fun­da­men­tal men­tal en­tities with no for­mal effec­tive de­scrip­tion in logic as we know it. Rather, it re­flects the idea that even in an in­tu­itive sense, suffi­ciently ef­fable magic tends to­ward sci­ence, and since our own brains are in fact com­putable, at­tempts to cash out the in­ef­fable in greater de­tail tend to turn it ef­fable. The tra­di­tional First-Cause on­tolog­i­cally-ba­sic R3 “God” falsifies re­duc­tion­ism; but if you re­define God as a Lord of the Ma­trix, let alone as ‘nat­u­ral se­lec­tion’, or ‘the way things are’, it doesn’t. An ir­re­ducible soul falsifies GRT, un­til I in­ter­ro­gate you on ex­actly how that soul works and what it’s made of and why there’s still such a thing as brain dam­age, in which case my in­ter­ro­ga­tion may cause you to ad­just your claim and ad­just it some more and fi­nally end up in R2 (or even end up with a pat­tern the­ory of iden­tity). It should also be noted that while the ad­jec­tive “ef­fable” is in R2, the ad­jec­tive “in­ef­fable” may quite pos­si­bly be in R3 only (can you ex­hibit an in­ef­fable thing?)

I in­tuit that a clas­si­cally mod­e­lable uni­verse that meta­phys­i­cally con­sists en­tirely of mind-stuff (no physics-stuff)

What does it mean to consist entirely of mind-stuff when all the actual structure of your universe is logical? What is the way things could be that would make that true, and how could we tell? This utterance is not yet clearly in my R2, which doesn’t have anything in it to describe “metaphysically consists of”. (Would you consider “The substance of the cracker becomes the flesh of Christ while its accidents remain the same” to be in your equivalent of R2, or only in your equivalent of R3?)

1. Expressibility. Everything (or anything) that is the case can in principle be fully expressed or otherwise represented. In other words, an AI is constructible-in-principle that could model every fact, everything that is so. Computational power and access-to-the-data could limit such an AI’s knowledge of reality, but basic effability could not.

2. Classical Expressibility. Everything (or anything) that is the case can in principle be fully expressed in classical logic. In addition to objective ineffability, we also rule out objective fuzziness, inconsistency, or ‘gaps’ in the World. (Perhaps we rule them out empirically; we may not be able to imagine a world where there is objective indeterminacy, but we at least intuit that our world doesn’t look like whatever such a world would look like.)

3. Logical Physicalism. The representational content of every true sentence can in principle be exhaustively expressed in terms very similar to contemporary physics and classical logic.

Origi­nally I thought that your Great Re­duc­tion­ist Th­e­sis was a con­junc­tion of 1 and 3, or of 2 and 3. But your re­cent an­swers sug­gest to me that for you GRT may sim­ply be Ex­press­ibil­ity (1). Irre­ducibly un­clas­si­cal truths are ruled out, not by GRT, but by the fact that we don’t seem to need to give up prin­ci­ples like Non-Con­tra­dic­tion and Ter­tium Non Datur in or­der to Speak Every Truth. And men­tal­is­tic or su­per­nat­u­ral truths are ex­cluded only in­so­far as they vi­o­late Ex­press­ibil­ity or just ap­pear em­piri­cally un­nec­es­sary.

If so, then we should be very care­ful to dis­t­in­guish your con­fi­dence in Ex­press­ibil­ity from your con­fi­dence in phys­i­cal­ism. Nei­ther, as I for­mu­lated them above, im­plies the other. And there may be good rea­son to en­dorse both views, pro­vided we can give more pre­cise con­tent to ‘terms very similar to con­tem­po­rary physics and clas­si­cal logic.’ Per­haps the eas­iest way to give some meat to phys­i­cal­ism would be to do so nega­tively: List all the clusters that do seem to vi­o­late the spirit of phys­i­cal­ism. For in­stance:

A list like this would give us some warn­ing signs that a view, even if log­i­cally speci­fi­able, may be de­vi­at­ing sharply from the sci­en­tific pro­ject. If you pre­cisely stipu­lated in log­i­cal terms how Magic works, for in­stance, but its mechanism was ex­tremely an­thro­pocen­tric (e.g., re­quiring that Latin-lan­guage phonemes ‘carve at the joints’ of fun­da­men­tal re­al­ity), that would seem to vi­o­late some­thing very im­por­tant about re­duc­tive phys­i­cal­ism, even if it doesn’t vi­o­late Ex­press­ibil­ity (i.e., we could pro­gram an AI to model mag­i­cal laws of this sort).

What does it mean to consist entirely of mind-stuff when all the actual structure of your universe is logical?

I’m not sure what you mean by ‘ac­tual struc­ture.’ I would dis­t­in­guish the Teg­mark-style the­sis ‘the uni­verse is meta­phys­i­cally made of logic-stuff’ from the more mod­est the­sis ‘the uni­verse is ex­haus­tively de­scrib­able us­ing purely log­i­cal terms.’ If we learned that all the prop­er­ties of billiard balls and nat­u­ral num­bers are equally speci­fi­able in set-the­o­retic terms, I think we would still have at least a lit­tle more rea­son to think that num­bers are sets than to think that billiard balls are sets.

So sup­pose we found a way to ax­io­m­a­tize ‘x be­ing from the per­spec­tive of y,’ i.e., a thought and its thinker. If we (some­how) learned that all facts are ul­ti­mately and ir­re­ducibly per­spec­ti­val (i.e., they all need an ob­server-term to be sat­u­rated), that might not con­tra­dict the ex­press­ibil­ity the­sis, but I think it would vi­o­late the spirit of phys­i­cal­ism.

(Would you con­sider “The sub­stance of the cracker be­comes the flesh of Christ while its ac­ci­dents re­main the same” to be in your equiv­a­lent of R2, or only in your equiv­a­lent of R3?)

I’m not sure. I doubt our uni­verse has ‘sub­stance-ac­ci­dent’ struc­ture, but there might be some nega­tive way to R2ify tran­sub­stan­ti­a­tion, even if (like epiphe­nom­e­nal­ism or events-out­side-the-ob­serv­able-uni­verse) it falls short of ver­ifi­a­bil­ity. Could we co­her­ently model our uni­verse as a byproduct of a cel­lu­lar au­toma­ton, while lack­ing a way to test this model? If so, then per­haps we could model ‘sub­stance-prop­er­ties’ as un­ob­serv­ables that are similarly Be­hind The Scenes, but are oth­er­wise struc­turally the same as ac­ci­dents (i.e., ob­serv­ables).

So… in my world, transubstantiation isn’t in R2, because I can’t coherently conceive of what a substance is, apart from accidents. For a similar reason, I don’t yet have R2-language for talking about a universe being metaphysically made of anything. I mean, I can say in R3 that perhaps physics is made of cheese, just like I can say that the natural numbers are made of cheese, but I can’t R2-imagine a coherent state of affairs like that. A similar objection applies to a logical universe which is allegedly made out of mental stuff. I don’t know how to imagine a logically structured universe being made of anything.

Hav­ing Latin-lan­guage phonemes carve at the joints of fun­da­men­tal re­al­ity seems very hard, be­cause in my world Latin-lan­guage phonemes are already re­duced—there’s already se­quen­tial sound-pat­terns mak­ing them up, and the ob­vi­ous way to have a logic de­scribing the physics of such a world is to have com­plex speci­fi­ca­tions of the phonemes which are ‘carv­ing at the joints’. It’s not to­tally clear to me how to make this com­plex thing a fun­da­men­tal in­stead, though per­haps it could be man­aged via a logic con­tain­ing enough spe­cial sym­bols—but to ac­tu­ally figure out how to write out that logic, you would have to use your own neu­ron-com­posed brain in which phonemes are not fun­da­men­tal.

I do agree that—if it were possible to rule out the Matrix, I mean, if spells not only work but the incantation is “Stupefy” then I know perfectly well someone’s playing an S-day prank on me—that finding that magic works would be a strong hint that the whole framework is wrong. If we actually find that prayers work, then pragmatically speaking, we’ve received a hint that maybe we should shut up and listen to what the most empirically powerful priests have to say about this whole “reductionism” business. (I mean, that’s basically why we’re listening to Science.) But that kind of meta-level “no, you were just wrong, shut up and listen to the spiritualist” is something you’d only execute in response to actually seeing magic, not in response to somebody hypothesizing magic. Our ability to hypothesize certain situations that would, pragmatically speaking, imply we were probably wrong about what was meaningful, doesn’t mean we’re probably wrong about what’s meaningful. More along the lines of, “Somebody said something you thought was in R3(only), but they generated predictions from it and those predictions came true, so better rethink your reasons for thinking it couldn’t go in R2.”

With all that said, it seems to me that R3-possibilities falsifying 1, 2, or a generalization of 3 to other effectively or formally specified physics (e.g., Time-Turners), with the proviso that we’re dealing in second-order logic rather than classical first-order logic, would all pretty much falsify the Great Reductionist Thesis. Some of your potential examples look to me like they’re not in my R2 (e.g. mental facts that can’t be expressed in non-mental terms) though I’m perfectly willing to discuss them colloquially in R3, and others seem relatively harmless (effects which aren’t further causes of anything? I could write a computer program like that). I am hard-pressed to R2-meaningfully describe a state of affairs that falsifies R1, though I can talk about it in R3.

I have an over­all agenda of try­ing to think like re­al­ity which says that I want my R1 to look as much like the uni­verse as pos­si­ble, and it’s okay to con­tem­plate re­stric­tions which might nar­row my R2 a lot rel­a­tive to some­one’s R3, e.g. to say, “I can’t seem to re­ally con­ceive of a uni­verse with fun­da­men­tally men­tal things any­more, and that’s a triumph”. So a lot of what looked to me years ago like mean­ingful non-re­duc­tion­ism, now seems more like mean­ingless non-re­duc­tion­ism rel­a­tive to my new stric­ter con­cep­tions of mean­ing—and that’s okay be­cause I’m try­ing to think less like a hu­man and more like re­al­ity.

So… in my world, tran­sub­stan­ti­a­tion isn’t in R2, be­cause I can’t co­her­ently con­ceive of what a sub­stance is, apart from ac­ci­dents.

Many math­e­mat­i­ci­ans, sci­en­tists, and philoso­phers be­lieve in things they call ‘sets.’ They be­lieve in sets partly be­cause of the ‘un­rea­son­able effec­tive­ness’ of set the­ory, partly be­cause they help sim­plify some of our the­o­ries, and partly be­cause of set the­ory’s sheer in­tu­itive­ness. But I have yet to hear any­one ex­plain to me what it means for one non-spa­tiotem­po­ral ob­ject to ‘be an el­e­ment of’ an­other. Inas­much as set the­ory is not gib­ber­ish, we un­der­stand it not through causal con­tact or ex­pe­ri­en­tial ac­quain­tance with sets, but by ex­plor­ing the the­o­ret­i­cal role these un­defined ‘set’ thin­gies over­all play (as­sisted, per­haps, by some analog­i­cal rea­son­ing).

‘Sub­stance’ and ‘ac­ci­dent’ are an­tiquated names for a very com­monly ac­cepted dis­tinc­tion: Between ob­jects and prop­er­ties. (Warn­ing: This is an over­sim­plifi­ca­tion. See The Warp and Woof of Me­ta­physics for the his­tor­i­cal ac­count.) Just as the effi­cacy of math­e­mat­ics tempts peo­ple into reify­ing the set-mem­ber dis­tinc­tion, the effi­cacy of propo­si­tional calcu­lus (or, more gen­er­ally, of hu­man lan­guage!) tempts peo­ple into reify­ing the sub­ject-pred­i­cate dis­tinc­tion. The ob­jects (or ‘sub­stances’) are what­ever we’re quan­tify­ing over, what­ever in­di­vi­d­ual(s) are in our do­main of dis­course, what­ever it is that pred­i­cates are pred­i­cated of; the prop­er­ties are what­ever it is that’s be­ing pred­i­cated.

And we don’t need to grant that it’s pos­si­ble for there to be an ob­ject with no prop­er­ties (∃x(∀P(¬P(x)))), or a com­pletely un­in­stan­ti­ated prop­erty (∃P(∀x(¬P(x)))). But once we in­tro­duce the dis­tinc­tion, Chris­ti­ans are free to try to ex­ploit it to make sense of their doc­trines. If set the­ory had ex­isted in the Mid­dle Ages, you can be sure that there would have been at­tempts to ex­pli­cate the Trinity in set-the­o­retic terms; but the silli­ness of such efforts would not nec­es­sar­ily have bled over into dele­gi­t­imiz­ing set the­ory it­self.
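
In a finite toy model the two quantified formulas are easy to evaluate directly. A sketch of mine (the domain and property extensions below are made up purely for illustration), treating each property as the set of objects bearing it:

```python
# Domain of objects, and a list of properties modelled by their extensions.
domain = {'a', 'b', 'c'}
properties = [{'a'}, {'a', 'b'}]

# ∃x ∀P ¬P(x): some object bears no property at all (a 'bare' object).
bare_object = any(all(x not in P for P in properties) for x in domain)

# ∃P ∀x ¬P(x): some property has no instances at all.
empty_property = any(all(x not in P for x in domain) for P in properties)

assert bare_object          # 'c' appears in no extension
assert not empty_property   # both extensions are inhabited
```

Whether such models correspond to anything metaphysically real is exactly the question at issue; the code only fixes what the object/property distinction says in extensional terms.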

That said, I sym­pa­thize with your baf­fle­ment. I’m not com­mit­ted to tak­ing set-mem­ber­ship or prop­erty-bear­ing com­pletely se­ri­ously. I just don’t think ‘I can’t imag­ine what a sub­stance would be like!’ is an ad­e­quate ar­gu­ment all on its own. I’m not sure I have a clear grasp on what it means for a set to have an el­e­ment, or what it means for a num­ber line to be dense and un­countable, or what it means for my left foot to be a com­plexly-val­ued am­pli­tude; but in all these cases we can gain at least a lit­tle un­der­stand­ing, even from ini­tially un­defined terms, based on the the­o­ret­i­cal work they do. Since we rely so heav­ily on such the­o­ries, I’m much more hes­i­tant to weigh in on their mean­ingless­ness than on their ev­i­den­tial jus­tifi­ca­tion.

I don’t yet have R2-lan­guage for talk­ing about a uni­verse be­ing meta­phys­i­cally made of any­thing.

You sound like a struc­tural re­al­ist. On this view, as I un­der­stand it, we don’t have rea­son to think that our con­cep­tions straight­for­wardly map re­al­ity, but we do have rea­son to think that a rel­a­tively sim­ple and uniform trans­for­ma­tion on our map would yield a pat­tern in the ter­ri­tory.

it seems to me that R3-possibilities falsifying 1, 2, or a generalization of 3 to other effectively or formally specified physics (e.g., Time-Turners), with the proviso that we’re dealing in second-order logic rather than classical first-order logic, would all pretty much falsify the Great Reductionist Thesis.

So is this a fair char­ac­ter­i­za­tion of the Great Re­duc­tion­ist Th­e­sis?: “Any­thing that is the case can in prin­ci­ple be ex­haus­tively ex­pressed in clas­si­cal sec­ond-or­der pred­i­cate logic, rely­ing only on pred­i­cates of con­ven­tional math­e­mat­ics (iden­tity, set mem­ber­ship) and of a mod­estly en­riched ver­sion of con­tem­po­rary physics.”

We could then elab­o­rate on what we mean by ‘mod­est en­rich­ment’ if some­one found a good way to add Thor­oughly Spooky Doc­trines (du­al­ism, ideal­ism, tra­di­tional the­ism, nihilism, triv­ial­ism, in­ef­fable what­sits, etc.) into our lan­guage. Ideally, we would do this as un-ad-hocily as pos­si­ble.

I think we both agree that ‘mean­ing’ won’t ul­ti­mately carve at the joints. So it’s OK if R2 and R3 look a bit ugly; we may be elid­ing some im­por­tant dis­tinc­tions when we speak sim­ply of a ‘mean­ingful vs. mean­ingless’ bi­nary. It’s cer­tainly my own ex­pe­rience that I can in­com­pletely grasp a term’s mean­ing, and that this is be­nign pro­vided that the as­pects I haven’t grasped are ir­rele­vant to what I’m rea­son­ing about.

Can I run some­thing by you? An ar­gu­ment oc­curred to me to­day that seems sus­pect, but I don’t know what I’m get­ting wrong. The con­clu­sion of the ar­gu­ment is that GRTt en­tails GRTm. For the pur­poses of this ar­gu­ment, GRTt is the state­ment that all true state­ments have a physico-log­i­cal ex­pres­sion (mean­ing phys­i­cal, log­i­cal, or phys­i­cal+log­i­cal ex­pres­sion). GRTm is the state­ment that all true and all false state­ments have a physico-log­i­cal ex­pres­sion.

P1) All true statements have a physico-logical expression. (GRTt)

P2) The negation of any false statement is true.

P3) If a statement has a physico-logical expression, its negation has a physico-logical expression.

C1) All false statements have a physico-logical expression. (from P1, P2, and P3)

C2) All true and all false statements have a physico-logical expression. (GRTm)

So for ex­am­ple, sup­pose XYZ is false, and has no physico-log­i­cal ex­pres­sion. If XYZ is false, then ~XYZ is true. By GRTt, ~XYZ has a physico-log­i­cal ex­pres­sion. But if ~XYZ has a physico-log­i­cal ex­pres­sion, then ~(~XYZ), or XYZ, does. Throw­ing a nega­tion in front of a state­ment can’t change the na­ture of the state­ment qua re­ducible.
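
The argument can also be checked by brute force in a toy model where ‘statements’ are just labels paired by a negation map (the labels and truth-values below are arbitrary, my own illustration): every choice of an ‘expressible’ set satisfying the premises turns out to contain all statements, true and false alike.

```python
from itertools import chain, combinations

# Four toy statements; neg pairs each statement with its negation,
# and the truth assignment respects P2 (negation flips truth-value).
stmts = ['A', '~A', 'B', '~B']
neg = {'A': '~A', '~A': 'A', 'B': '~B', '~B': 'B'}
true = {'A': True, '~A': False, 'B': False, '~B': True}

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

for expressible in map(set, powerset(stmts)):
    p1 = all(s in expressible for s in stmts if true[s])                   # GRTt
    p3 = all((neg[s] in expressible) == (s in expressible) for s in stmts)
    if p1 and p3:
        # Conclusion (GRTm): every statement is expressible.
        assert expressible == set(stmts)
```

This is not a proof of anything beyond the four-statement model, of course; it just makes the XYZ example above mechanical.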

I think your ar­gu­ment works. But I can’t ac­cept GRTm; so I’ll have to ditch GRTt. In its place, I’ll give an­a­lyz­ing GRT an­other go; call this new for­mu­la­tion GRTd:

‘Every true state­ment can be de­duc­tively de­rived from the set of purely phys­i­cal and log­i­cal truths com­bined with state­ments of the se­man­tics of the non-phys­i­cal and non-log­i­cal terms.’

This is quite un­like (and no longer im­plies) GRTm, ‘Every mean­ingful state­ment is ex­press­ible in purely phys­i­cal and log­i­cal terms.’

The prob­lem for GRTt was that state­ments like ‘there are no gods’ and ‘there are no ghosts’ seem to be true, but cast in non-phys­i­cal terms; so ei­ther they are re­ducible to phys­i­cal terms (in which case both GRTt and GRTm are true), or ir­re­ducible (in which case both GRTt and GRTm are false). For GRTd, it’s OK if ‘there are no ghosts’ can’t be an­a­lyzed into strictly phys­i­cal terms, pro­vided that ‘there are no ghosts’ is en­tailed by a state­ment of what ‘ghost’ means plus all the purely phys­i­cal and log­i­cal truths.

For example, if part of what ‘ghost’ means is ‘something non-physical,’ then ‘there are no ghosts’ will be derivable from a complete physical description of the world, provided that such a description includes a physical/logical totality fact. You list everything that exists, then add the totality fact ‘nothing except the above entities exists’; since the semantics of ‘ghost’ ensures that ‘ghost’ is not identical to anything on the physicalist list, we can then derive that there are no ghosts.

Note that the se­man­tic ‘bridge laws’ are them­selves en­tailed by (and, in all like­li­hood, an­a­lyz­able into) purely phys­i­cal facts about the brains of English lan­guage speak­ers.

Well done, I like GRTd es­pe­cially in that it pulls free of refer­ence to ex­press­ibil­ity and mean­ingful­ness. My only worry at the mo­ment is the to­tal­ity fact, partly be­cause of what I take EY to want from the GRT in refer­ence to R1. I take it we will agree right off that the to­tal­ity fact can’t fol­low from hav­ing listed all the physico-log­i­cal facts. Other­wise we could de­rive ‘there are no ghosts’ right now, just given the mean­ing of ‘ghost’. But we need the an­swer to the ques­tion posed by R1 to be (in ev­ery case which doesn’t in­volve a purely log­i­cal con­tra­dic­tion) an em­piri­cal an­swer. What we want to say about ghosts is not that they’re im­pos­si­ble, but that their ex­is­tence is ex­tremely un­likely given the set of physico-log­i­cal facts we do have. We won’t ever have op­por­tu­nity to de­ploy a to­tal­ity fact (since this re­quires om­ni­science, it seems), but it seems like an im­por­tant part of the ex­pres­sion of the GRTd.

But if we can’t get the to­tal­ity fact just from hav­ing listed all the physico-log­i­cal facts, and if the to­tal­ity fact must it­self be a physico-log­i­cal fact then I have a hard time see­ing how we can de­duce from physico-log­i­cal om­ni­science that there are no ghosts. In or­der to de­duce the non-ex­is­tence of ghosts, we’d need first to de­duce the to­tal­ity fact (since this is a premise in the former de­duc­tion), but if the to­tal­ity fact is not de­ducible from all the physico-log­i­cal facts, then in or­der to de­duce it, it looks like we need ‘there are no ghosts’ as a premise. But then our de­duc­tion of ‘there are no ghosts’ begs the ques­tion.

Un­less I’m miss­ing some­thing, it seems to me that the to­tal­ity fact has to end up be­ing de­ducible from all the physico-log­i­cal facts if de­duc­tions which em­ploy it are to be valid. But this again makes the GRTd (speci­fi­cally that part of it which de­scribes the to­tal­ity fact) an a pri­ori claim, which we’re try­ing to avoid es­pe­cially be­cause it means that GRTd is not an an­swer to R1 (which is what EY, at least, is look­ing for).

The totality fact could take a number of different forms. For instance, ‘Everything is a set, a spacetime region, a boson, or a fermion’ would suffice, if our semantics for ‘ghost’ made it clear that ghosts are none of those things. This is why we don’t need omniscient access to every object to formulate the fact; all we need is a plausibly finished set of general physical categories. If ‘physical’ and ‘logical’ are themselves well-defined terms in our physics, we could even formulate the totality fact simply as: ‘Everything is physical or logical.’
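
A category-level totality fact makes the ghost derivation almost mechanical. A toy sketch (the entity names and category tags are invented for illustration):

```python
# The totality fact, stated over categories rather than individual objects.
PHYSICO_LOGICAL = {'set', 'spacetime region', 'boson', 'fermion'}

# A toy inventory of the world, tagging each entity with its category.
world = {'electron': 'fermion',
         'photon': 'boson',
         'here-now': 'spacetime region'}

# Part of the assumed semantics of 'ghost': ghosts fall under none of the
# physical/logical categories.
def is_ghost(category):
    return category not in PHYSICO_LOGICAL

# Premise (totality fact): everything in the world is in a listed category.
assert all(c in PHYSICO_LOGICAL for c in world.values())

# Derived: therefore nothing in the world is a ghost.
assert not any(is_ghost(c) for c in world.values())
```

The work is done entirely by the totality premise plus the semantic stipulation; no individual entity ever has to be inspected for ghostliness.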

Another, more mod­est to­tal­ity-style fact would be: ‘The phys­i­cal is causally closed.’ This weaker ver­sion won’t let us de­rive ‘there are no ghosts,’ but it will let us de­rive ‘ghosts, if real, have no causal effect on the phys­i­cal,’ which is pre­sum­ably what we’re most in­ter­ested in any­way.

GRTd it­self doesn’t force you to ac­cept to­tal­ity facts (also known as Porky Pig facts). But if you re­ject these strange facts, then you’ll end up need­ing ei­ther to af­firm GRTm too, or need­ing to find some way to ex­press nega­tive ex­is­ten­tial facts about Spooky Things in your pris­tine phys­i­cal/​log­i­cal lan­guage. All three of these ap­proaches have their costs, but I think GRTd is the most mod­est op­tion, since it doesn’t com­mit us to any se­ri­ous spec­u­la­tion about the limits of se­man­tics or trans­lata­bil­ity.

I take it we will agree right off that the to­tal­ity fact can’t fol­low from hav­ing listed all the physico-log­i­cal facts.

I think the to­tal­ity fact is a phys­i­cal (or ‘mixed’) fact. In­tu­itively, it’s a fact about our world that it doesn’t ‘keep go­ing’ past a cer­tain point.

it seems to me that the to­tal­ity fact has to end up be­ing de­ducible from all the physico-log­i­cal facts if de­duc­tions which em­ploy it are to be valid

The to­tal­ity fact can’t be strictly de­duced from any other fact. In all cases these to­tal­ity facts are em­piri­cal in­fer­ences from the ap­par­ent abil­ity of our phys­i­cal pred­i­cates to ac­count for ev­ery­thing. Inas­much as we are con­fi­dent that (cat­e­gory-wise) ‘That’s all, folks,’ we are con­fi­dent in there be­ing no more cat­e­gories, and hence (if only im­plic­itly) in there be­ing no Spooky ad­denda.

No­tice this doesn’t com­mit us to say­ing that we can mean­ingfully talk about Spooky non­phys­i­cal en­tities. All it com­mits us to is the claim that if we can mean­ingfully posit such en­tities, then we should re­ject them with at least as much con­fi­dence as we af­firm the to­tal­ity fact.

So, I like GRTd, in­so­far as it cap­tures both what is so plau­si­ble about phys­i­cal­ism, and in­so­far as the ‘to­tal­ity fact’ ex­presses an im­por­tant kind of em­piri­cal in­fer­ence: from even a small sub­set of all the physico-log­i­cal facts, we can get a good gen­eral pic­ture of how the uni­verse works, and what kinds of things are real.

I still have questions about the GRTd as a principle, however. I don't see how the following three statements are consistent with one another:

S1) GRTd: ‘Every true state­ment can be de­duc­tively de­rived from the set of purely phys­i­cal and log­i­cal truths com­bined with state­ments of the se­man­tics of the non-phys­i­cal and non-log­i­cal terms.’

S2) The to­tal­ity fact is true.

S3) ‘The to­tal­ity fact can’t be strictly de­duced from any other fact.’

One of these three has to go, and I strongly sus­pect I’ve mi­s­un­der­stood S3. So my ques­tion is this: Given all the phys­i­cal and log­i­cal facts, com­bined with state­ments of the se­man­tics of any non-phys­i­cal and non-log­i­cal terms one might care to make use of, do you think we could de­duce the to­tal­ity fact?

The to­tal­ity fact is one of the phys­i­cal/​log­i­cal facts, and can be ex­pressed in purely phys­i­cal/​log­i­cal terms. For in­stance, in a toy uni­verse where the only prop­er­ties were P (‘be­ing a par­ti­cle’) and C (‘be­ing a space­time point’), the to­tal­ity fact would have the form ∀x(P(x) ∨ C(x)) to ex­clude other cat­e­gories of en­tity. A more com­plete to­tal­ity fact would ex­clude bonus par­ti­cles and space­time points too, by as­sert­ing ∀x(x=a ∨ x=b ∨ x=c...), where {a,b,c...} is the (per­haps trans­finitely large) set of par­ti­cles and points. You can also ex­press the same idea us­ing ex­is­ten­tial quan­tifi­ca­tion.
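A crude illustration of the toy universe just described (all names here are invented for the sketch, and sets stand in for predicates): we can check mechanically that the totality fact ∀x(P(x) ∨ C(x)) is what licenses 'there are no ghosts':

```python
# Toy universe: the only categories are P ('being a particle') and
# C ('being a spacetime point'). Everything here is illustrative.

particles = {"a", "b"}
points = {"c", "d"}
domain = particles | points  # everything that exists in the toy world

def is_P(x): return x in particles
def is_C(x): return x in points

# The totality fact, in the form  forall x (P(x) or C(x)):
totality_fact = all(is_P(x) or is_C(x) for x in domain)

# A 'ghost' is, by the assumed semantics, neither a particle nor a point.
def is_ghost(x): return not is_P(x) and not is_C(x)

# Given the totality fact, 'there are no ghosts' follows:
no_ghosts = not any(is_ghost(x) for x in domain)

print(totality_fact, no_ghosts)  # True True
```

The point of the sketch is that `no_ghosts` is not an extra posit; it is a consequence of the totality fact plus the semantics of 'ghost'.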

S1, S2, and S3 are all cor­rect, pro­vided that the to­tal­ity fact is purely phys­i­cal and log­i­cal. (Ob­vi­ously, any phys­i­cal/​log­i­cal fact fol­lows triv­ially from the set of all phys­i­cal/​log­i­cal facts.) GRTd says noth­ing about which, if any, phys­i­cal/​log­i­cal facts are deriv­able from a proper sub­set of the phys­i­cal/​log­i­cal. (It also says noth­ing about whether there are non-physi­colog­i­cal truths; it only de­nies that, if there are some, their truth or false­hood can fail to rest en­tirely on the phys­i­cal/​log­i­cal facts.)

A sin­gle gi­ant to­tal­ity fact would do the job, but you could also re­place it (or in­tro­duce re­dun­dancy) by posit­ing a large num­ber of smaller to­tal­ity facts. Sup­pose you want to define a sim­ple clas­si­cal uni­verse in which a 2x2x2-inch cube ex­ists. You can quan­tify over a spe­cific 2x2x2-inch re­gion of space, and as­sert that each of the points within the in­ter­val is oc­cu­pied. But that only posits an ob­ject that’s at least that large; we also need to define the empty space around it, to give it a definite bor­der. A to­tal­ity fact (or a small army of them) could give you the req­ui­site bor­der, es­tab­lish­ing ‘there’s no more cube’ in the same way that the Gi­ant To­tal­ity Fact es­tab­lishes ‘there’s no more re­al­ity.’ But if you get a kick out of par­si­mony or con­ci­sion, you don’t need to do this again and again for each new bounded ob­ject you posit. In­stead, you can stick to pos­i­tive as­ser­tions un­til the very end, and then clean up af­ter your­self with the Gi­ant To­tal­ity Fact. That there’s no more re­al­ity than what you’ve de­scribed, af­ter all, im­plies (among other things) that there’s no more cube.
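If it helps, here is a crude finite sketch of the cube story (the integer grid and all names are assumptions of the sketch, not anything from the thread): positive occupancy assertions leave the object's border open, and a totality fact closes it.

```python
# Positive assertion: every point in a 2x2x2 region is occupied.
cube = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
occupied = set(cube)  # a world containing exactly what we asserted

# The positive assertions alone are compatible with a *larger* object:
occupied_bigger = occupied | {(2, 0, 0)}  # a world with a bonus occupied point
# Both worlds satisfy 'each point of the region is occupied'.

# A totality fact cleans up: 'nothing is occupied outside the region.'
def satisfies_totality(world):
    return world <= cube  # no occupied point beyond the cube's border

print(satisfies_totality(occupied))         # True: the bounded cube
print(satisfies_totality(occupied_bigger))  # False: ruled out by totality
```

This mirrors the text's point: the Giant Totality Fact ('there's no more reality') does the border-drawing work for every bounded object at once.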

(Ob­vi­ously, any phys­i­cal/​log­i­cal fact fol­lows triv­ially from the set of all phys­i­cal/​log­i­cal facts.)

Ah, I took GRTd to mean that 'every true statement (including all physical and logical truths) can be deductively derived from the set of purely physical and logical truths (excluding the one to be derived)...'. Thus, if the totality fact is true, then it should be derivable from the set of all physico-logical facts (excluding the totality fact). Is that right, or have I misunderstood GRTd?

I may, I think, just be overestimating what it takes to plausibly posit the totality fact: i.e. you may just mean that we can have a lot of confidence in the totality fact just by having as broad and coherent a view of the universe as we actually do right now. The totality fact may be false, but it's supported in general by the predictive power of our theories and an apparent lack of spooky phenomena. If we had all the physico-logical facts, we could be super duper confident in the totality fact, as confident as we are about anything. It would by no means follow deductively from the set of all physico-logical facts, but it's not that sort of claim anyway. Is that right?

The edit is fine. Let me add that ‘the’ to­tal­ity fact may be a mis­lead­ing lo­cu­tion. Nearly ev­ery model that can be an­a­lyzed fact­wise con­tains its own to­tal­ity fact, and which model we’re in will change what the ‘to­tal­ity’ is, hence what the shape of the to­tal­ity fact is.

We can be con­fi­dent that there is at least one fact of this sort in re­al­ity, sim­ply be­cause triv­ial­ism is false. But GRTd does con­strain what that fact will have to look like: It will have to be purely log­i­cal and phys­i­cal, and/​or deriv­able from the purely log­i­cal and phys­i­cal truths. (And the only thing we could de­rive a Big To­tal­ity Fact from would be other, smaller to­tal­ity facts like ‘there’s no more square,’ plus a sec­ond-or­der to­tal­ity fact.)

I didn’t in­tend for you to read ‘(ex­clud­ing the one to be de­rived)’ into the state­ment. The GRTd I had in mind is a lot more mod­est, and al­lows for to­tal­ity facts and a richer va­ri­ety of causal re­la­tions.

GRTd isn’t a tau­tol­ogy (un­less GRTm is true), be­cause if there are log­i­cally un­der­iv­able non­phys­i­cal and non­log­i­cal truths, then GRTd is false. ‘X can be de­rived from the con­junc­tion of GRTd with X’ is a tau­tol­ogy, but an in­nocu­ous one, since it leaves open the pos­si­bil­ity that ‘X’ on its lone­some is a gar­den-va­ri­ety con­tin­gent fact.

I think that what you think are coun­terex­am­ples to GRTm are a large num­ber of things which, ex­am­ined care­fully, would end up in R3-only, and not in R2.

I fur­ther­more note that you just re­jected GRTt, which sounds scar­ily like con­clud­ing that ac­tual non-re­duc­tion­ist things ex­ist, be­cause you didn’t want to ac­cept the con­clu­sion that talk of non-phys­i­cal ghosts might fail strict qual­ifi­ca­tions of mean­ing. How could you pos­si­bly get there from here? How could your thoughts about what’s mean­ingful, en­tail that the laws of physics must be other than what we’d pre­vi­ously ob­served them to be? Shouldn’t reach­ing that con­clu­sion re­quire like a par­ti­cle ac­cel­er­a­tor or some­thing?

Alter­na­tively, per­haps your re­jec­tion of GRTt isn’t in­tended to en­tail that non-re­duc­tion­ist things ex­ist. If so, can you con­strue a nar­rower ver­sion of GRTt which just says that, y’know, non-re­duc­tion­ist thin­gies don’t ex­ist? And then would Esar’s ar­gu­ment not go through for this ver­sion?

I think Esar’s ar­gu­ment mainly runs into trou­ble when you want to call R3-state­ments ‘false’, in which case their nega­tions are col­lo­quially true but in R3-only be­cause there’s no strictly co­her­ent and mean­ingful (R2) way to de­scribe what doesn’t ex­ist (i.e. non-phys­i­cal ghosts). If your de­sire to ap­ply this lan­guage de­mands that you con­sider these R3-state­ments mean­ingful, then you should re­ject GRTm, I sup­pose—though not be­cause you dis­agree with me about what stric­ter stan­dards en­tail, but be­cause you want the word “mean­ingful” to ap­ply to looser stan­dards. How­ever, get­ting from there to re­ject­ing R1 is a se­vere prob­lem—though from the de­scrip­tion, it’s pos­si­ble you don’t mean by GRTt what I mean by R1. I am a bit wor­ried that you might want ‘non-phys­i­cal ghosts don’t ex­ist’ to be true, hence mean­ingful, hence its nega­tion to also be mean­ingful, hence a propo­si­tion, hence there to be some state of af­fairs that could cor­re­spond to non-phys­i­cal ghosts ex­ist­ing, hence for the uni­verse to not be shaped like my R1. Which would be a very strange con­clu­sion to reach start­ing from the premise that it’s ‘true’ that ‘ghosts do not ex­ist’.

you just re­jected GRTt, which sounds scar­ily like con­clud­ing that ac­tual non-re­duc­tion­ist things exist

To re­ject GRTt is to af­firm: “Some truths are not ex­press­ible in phys­i­cal-and/​or-log­i­cal terms.” Does that im­ply that ir­re­ducibly non­phys­i­cal things ex­ist? I don’t quite see why. My ini­tial thought is this: I am much more con­fi­dent that phys­i­cal­ism is true than that non­phys­i­cal­ism is in­ex­press­ible or mean­ingless. But if this phys­i­cal­ism I have such faith in en­tails that non­phys­i­cal­ism is in­ex­press­ible, then ei­ther I should be vastly more con­fi­dent that non­phys­i­cal­ism is mean­ingless, or vastly less con­fi­dent that phys­i­cal­ism is true, or else GRTt does not cap­ture the in­tu­itively very plau­si­ble heart of phys­i­cal­ism. Maybe GRTt and GRTm are cor­rect; but that would take a lot of care­ful ar­gu­men­ta­tion to demon­strate, and I don’t want to hold phys­i­cal­ism it­self hostage to GRTm. I don’t want a dis­proof of GRTm to over­turn the en­tire pro­ject of re­duc­tive phys­i­cal­ism; the pro­ject does not hang on so thin a thread. So GRTd is just my new at­tempt to ar­tic­u­late why our broadly nat­u­ral­is­tic, broadly sci­en­tific world-view isn’t wholly pred­i­cated on our con­fi­dence in the mean­ingless­ness of the as­ser­tions of the Other Side.

This dis­pute is over whether, in a phys­i­cal uni­verse, we can make sense of any­one even be­ing able to talk about any­thing non-phys­i­cal. Four is­sues com­pli­cate any quick at­tempts to af­firm GRTm:

1) Meaning itself is presumably nonfundamental. Without a clear understanding of exactly what is neurologically involved when a brain makes what we call 'representations,' attempts to weigh in on what can and can't be meaningful will be somewhat speculative. And since meaning is nonfundamental, truth is also nonfundamental; it is really an anthropological and linguistic category more than a metaphysical one. So sacrificing GRTt may not be as devastating as it initially seems.

2) ‘Log­i­cal pin­point­ing’ com­pli­cates our the­ory of refer­ence. Num­bers are ab­stracted from ob­served reg­u­lar­i­ties, but we never come into causal con­tact with num­bers them­selves; yet we seem to be able to talk about them. So if there is some way to ab­stract away from phys­i­cal­ity it­self, per­haps ‘ghost’ could be an ex­am­ple of such ab­strac­tion (albeit of a less be­nign form than ‘num­ber’). The pos­si­bil­ity doesn’t seem to­tally crazy to me.

3) It remains very unclear exactly what work is being done by 'physical' (and, for that matter, 'logical') in our formulations of GRT. Perhaps this doesn't matter: we can define 'physical' however we please, and then it will be much easier to work out whether we can talk about anything nonphysical.

One worry is that if we can’t speak of any­thing non­phys­i­cal, then the term ‘phys­i­cal’ it­self risks fal­ling into mean­ingless­ness. GRTd doesn’t face this prob­lem, and al­lows us to take the in­tu­itive route of sim­ply as­sert­ing the false­hood of anti-phys­i­cal­isms; it lets us do what we origi­nally wanted with ‘phys­i­cal­ism,’ which was to sift out the ex­ces­sively Spooky doc­trines at the out­set. In con­trast, it’s not clear what use­ful work ‘phys­i­cal­ism’ is do­ing if we fol­low the GRTm ap­proach. If GRTm’s phys­i­cal­ism is a doc­trine at all, it’s a very strange (and per­haps tau­tolo­gous) one.

4) Traditionally, there's been a split between positivists who wanted to reduce everything to logical constructs plus first-person experience, and positivists who wanted to reduce everything to logical constructs plus third-person physical science. I personally find the latter approach more plausible, though I understand the post-Cartesian appeal of Russell's phenomenalist project. But it troubles me to see the two sides insisting, with equal vehemence, that the other side is not only mistaken but speaking gibberish. Even as an eliminative physicalist and an Enemy of Qualia, I find it plausible that we have some (perhaps fundamentally mistaken) concept of a difference between experiences (which are 'from a vantage point') and objective events (which lack any 'point-of-view' structure). If there's anything genuinely under dispute between the first-person camp and the third-person camp, then this provides a simple example of why GRTt is false: simply for grammatical reasons, there are falsehoods (indexicals, perhaps) that cannot be perfectly expressed in physical terms. That doesn't mean that we can't physicalistically describe why and how someone came to assert P; it just means we can't assert P ourselves in our stripped-down fundamental language.

Per­haps this is a more palat­able way to put it: We can ex­plain in purely phys­i­cal and log­i­cal terms why ev­ery false sen­tence is false. But there is no one-to-one cor­re­spon­dence be­tween false non-fun­da­men­tal as­ser­tions and false fun­da­men­tal as­ser­tions. Rather, in cases like ‘there are no gods’ and ‘there are no ghosts,’ there is a many-to-one re­la­tion­ship, since all state­ments of those sorts are made true by the con­junc­tion of all the phys­i­cal and log­i­cal truths (in­clud­ing the to­tal­ity fact). But it’s im­plau­si­ble to treat this Gi­gan­tic Fact as the phys­i­cal mean­ing or fi­nal anal­y­sis of false­hoods like ‘I have ex­pe­rienced red­ness-qualia.’

there’s no strictly co­her­ent and mean­ingful (R2) way to de­scribe what doesn’t ex­ist (i.e. non-phys­i­cal ghosts)

That seems like too strong of a state­ment. Surely we can ex­press false­hoods (in­clud­ing false ex­is­ten­tial gen­er­al­iza­tions) in our finished phys­i­cal/​log­i­cal lan­guage. We can de­scribe situ­a­tions and ob­jects that don’t ex­ist. The ques­tion is just whether the de­scrip­tive el­e­ments our sparse lan­guage uti­lizes will be up to the task of con­struct­ing ev­ery mean­ingful pred­i­cate (and in a way that al­lows our lan­guage to as­sert the pred­i­ca­tion, not just to de­scribe the act of some­one else as­sert­ing it). So far, that seems to me to be more open to doubt than does gar­den-va­ri­ety phys­i­cal­ism.

I don’t see any­thing wrong with this kind of self-refer­ence. We can only ex­plain what gen­er­al­iza­tions are by as­sert­ing gen­er­al­iza­tions about gen­er­al­iza­tion; but that doesn’t un­der­mine gen­er­al­iza­tion it­self. GRT would only be an im­me­di­ate prob­lem for it­self if GRT didn’t en­com­pass it­self.

Okay, so let's assume that the generalization side of things is not a problem, though I hope you'll grant me that if a generalization about x's is meaningful, propositions expressing x's individually are meaningful. That is, if 'every meaningful proposition can be expressed by physics+logic (eventually)' is meaningful, then 'the proposition "the cat is on the mat" is meaningful' is meaningful. It's this that I'm worried about, and the generalization only indirectly. So:

1) A propo­si­tion is mean­ingful if and only if it is ex­press­ible by physics+logic, or merely by logic.

2) If a propo­si­tion is ex­press­ible by physics+logic, it con­strains the pos­si­ble wor­lds.

3) If the propo­si­tion “the cat is on the mat” is mean­ingful, and it is ex­press­ible by physics+logic, then it con­strains the pos­si­ble wor­lds.

4) If the propo­si­tion “the cat is on the mat” con­strains the pos­si­ble wor­lds, then the propo­si­tion “the propo­si­tion ‘the cat is on the mat’ is mean­ingful” does not con­strain the pos­si­ble wor­lds. Namely, no propo­si­tion of the form ‘”XYZ” con­strains the pos­si­ble wor­lds’ it­self con­strains the pos­si­ble wor­lds.

So if ‘XYZ’ con­strains the pos­si­ble wor­lds, then for ev­ery pos­si­ble world, XYZ is ei­ther true of that world or false of that world. But if the propo­si­tion ‘”XYZ” con­strains the pos­si­ble wor­lds’ ex­presses sim­ply that, namely that for ev­ery pos­si­ble world XYZ is ei­ther true or false of that world, then there is no world of which ‘”XYZ” con­strains the pos­si­ble wor­lds’ is false.

5) The propo­si­tion ‘the propo­si­tion “the cat is on the mat” is mean­ingful’ is not both mean­ingful and ex­press­ible by physics+logic. But it is mean­ingful, and there­fore (as per premise 1) it is ex­press­ible by mere logic.

6) Every gen­er­al­iza­tion about a purely log­i­cal claim is it­self a purely log­i­cal claim (I’m not sure about this premise)

7) The GRT is a purely log­i­cal claim.

I'm thinking EY wants to get off the GRT boat here: I don't think he intends the GRT to be a logical axiom or derivable from logical axioms. Nevertheless, if he does want the GRT to be an axiom of logic, then in order for it to be a meaningful axiom of logic, it still has to pick out one logical model as opposed to another.

But here, the prob­lem sim­ply re­curs. If ‘The propo­si­tion ‘GRT’ is mean­ingful’ is mean­ingful then it doesn’t, in the rele­vant re­spect, pick out one log­i­cal model as op­posed to an­other.
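For what it's worth, the possible-worlds machinery invoked in premises (2) and (4) above can be sketched in miniature. Everything below is a toy assumption, modeling a proposition, as is standard, by the set of worlds at which it is true:

```python
# Toy model: a proposition is the set of possible worlds where it holds,
# and it 'constrains' the worlds iff it rules at least one world out.
# The worlds and propositions here are invented for illustration.

worlds = {"w1", "w2", "w3"}

cat_on_mat = {"w1", "w2"}   # true in w1 and w2, false in w3
tautology = set(worlds)     # true everywhere; rules nothing out

def constrains(prop):
    return prop != worlds   # some world is excluded

print(constrains(cat_on_mat))  # True: w3 is ruled out
print(constrains(tautology))   # False: compatible with every world

# The meta-claim "cat_on_mat constrains the worlds" is evaluated once,
# about the model as a whole; it does not itself vary world-by-world,
# which is what premise (4) is driving at.
```

On this toy picture, empirical (physics+logic) propositions are the ones for which `constrains(...)` is `True`, while the meta-level meaningfulness claim is a claim about the whole model rather than a claim true at some worlds and false at others.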

2) If a propo­si­tion is ex­press­ible by physics+logic, it con­strains the pos­si­ble wor­lds.

I don’t think we need this rule. It would make log­i­cal truths /​ tau­tolo­gies mean­ingless, in­ex­press­ible, or mag­i­cal. (We shouldn’t dive into Wittgen­stei­nian mys­ti­cism that read­ily.)

4) If the propo­si­tion “the cat is on the mat” con­strains the pos­si­ble wor­lds, then the propo­si­tion “the propo­si­tion ‘the cat is on the mat’ is mean­ingful” does not con­strain the pos­si­ble wor­lds.

That de­pends on what you mean by “propo­si­tion.” The writ­ten sen­tence “the cat is on the mat” could have been un­gram­mat­i­cal or se­man­ti­cally null, like “col­or­less green ideas sleep fu­ri­ously.” After all, a differ­ent lin­guis­tic com­mu­nity could have ex­isted in the role of the English lan­guage. So our se­man­tic as­ser­tion could be rul­ing out wor­lds where “the cat is on the mat” is ill-formed.

On the other hand, if by “propo­si­tion” you mean “the spe­cific mean­ing of a sen­tence,” then your sen­tence is re­ally say­ing “the mean­ing of ‘the cat is on the mat’ is a mean­ing,” which is just a spe­cial case of the tau­tol­ogy “mean­ings are mean­ings.” So if we aren’t com­mit­ted to deem­ing tau­tolo­gies mean­ingless in the first place, we won’t be com­mit­ted to deem­ing this par­tic­u­lar tau­tol­ogy mean­ingless.

But if the propo­si­tion ‘”XYZ” con­strains the pos­si­ble wor­lds’ ex­presses sim­ply that, namely that for ev­ery pos­si­ble world XYZ is ei­ther true or false of that world, then there is no world of which ‘”XYZ” con­strains the pos­si­ble wor­lds’ is false.

This looks like a prob­lem of self-refer­ence, but it’s re­ally a prob­lem of essence-se­lec­tion. When we iden­tify some­thing as ‘the same thing’ across mul­ti­ple mod­els or pos­si­ble wor­lds, we’re stipu­lat­ing an ‘essence,’ a set of prop­er­ties pro­vid­ing iden­tity-con­di­tions for an ob­ject. Without such a stipu­la­tion, we couldn’t (per Leib­niz’s law) iden­tify ob­jects as be­ing ‘the same’ while they vary in tem­po­ral, spa­tial, or other prop­er­ties. If we don’t in­clude the spe­cific mean­ing of a sen­tence in its essence, then we can al­low that the ‘same’ sen­tence could have had a differ­ent mean­ing, i.e., that there are mod­els in which sen­tence P does not ex­press the se­man­tic con­tent ‘Q.’ But if we in­stead treat the mean­ing of P as part of what makes a sen­tence in a given model P, then it is con­tra­dic­tory to al­low the pos­si­bil­ity that P would lack the mean­ing ‘Q,’ just as it would be con­tra­dic­tory to al­low the pos­si­bil­ity that P could have ex­isted with­out P ex­ist­ing.

What’s im­por­tant to keep in mind is that which of these cases arises is a mat­ter of our de­ci­sion. It’s not a deep meta­phys­i­cal truth that some essences are ‘right’ and some are ‘wrong;’ our in­ter­ests and com­pu­ta­tional con­straints are all that force us to think in terms of es­sen­tial and inessen­tial prop­er­ties at all.

If ‘The propo­si­tion ‘GRT’ is mean­ingful’ is mean­ingful then it doesn’t, in the rele­vant re­spect, pick out one log­i­cal model as op­posed to an­other.

Only be­cause you’ve stipu­lated that mean­ingful­ness is es­sen­tial to GRT (and to propo­si­tions in gen­eral). This isn’t a spooky prob­lem; you could have gen­er­ated the same prob­lem by claiming that ‘all cats are mam­mals’ fails to con­strain the pos­si­ble wor­lds, on the grounds that cats are es­sen­tially mam­mals, i.e., in all wor­lds if x is a non-mam­mal then we im­me­di­ately know it’s a non-cat (among other things). Some­one with a differ­ent defi­ni­tion of ‘cat,’ or of ‘GRT,’ would have ar­rived at a differ­ent con­clu­sion. But we can’t just say willy-nilly that all truths are es­sen­tially true; oth­er­wise the only pos­si­ble world will be the ac­tual world, per­haps a plau­si­ble claim meta­phys­i­cally but not at all a plau­si­ble claim epistem­i­cally. (And real pos­si­bil­ity is epistemic, not meta­phys­i­cal.)

Also, ‘GRT’ is not in any case log­i­cally true; cer­tainly it is not an ax­iom, and there is no rea­son to treat it as one.

I don’t think we need this rule. It would make log­i­cal truths /​ tau­tolo­gies mean­ingless, in­ex­press­ible, or mag­i­cal. (We shouldn’t dive into Wittgen­stei­nian mys­ti­cism that read­ily.)

No, I didn’t say that con­strain­ing pos­si­ble wor­lds is a nec­es­sary con­di­tion on mean­ing. I said this:

1) A propo­si­tion is mean­ingful if and only if it is ex­press­ible by physics+logic, or merely by logic.

2) If a propo­si­tion is ex­press­ible by physics+logic, it con­strains the pos­si­ble wor­lds.

This leaves open the pos­si­bil­ity of mean­ingful, non-world-con­strain­ing propo­si­tions (e.g. tau­tolo­gies, such as the claims of logic), only they are not physics+logic ex­press­ible, but only logic ex­press­ible.

That de­pends on what you mean by “propo­si­tion.” The writ­ten sen­tence “the cat is on the mat” could have been un­gram­mat­i­cal or se­man­ti­cally null, like “col­or­less green ideas sleep fu­ri­ously.”

That’s not rele­vant to my point. I’d be happy to re­place it with any propo­si­tion we can agree (for the sake of ar­gu­ment) to be mean­ingful. In fact, my ar­gu­ment will run with an un­mean­ingful propo­si­tion (if such a thing can be said to ex­ist) as well.

On the other hand, if by “propo­si­tion” you mean “the spe­cific mean­ing of a sen­tence,”

No, this isn't what I mean. By 'proposition' I mean a sentence, considered independently of its particular manifestation in a language. For example, 'Schnee ist weiss' and 'Snow is white' express the same proposition. Saying and writing 'Schnee ist weiss' express the same proposition.

This looks like a prob­lem of self-refer­ence, but it’s re­ally a prob­lem of essence-se­lec­tion. When we iden­tify some­thing as ‘the same thing’ across mul­ti­ple mod­els or pos­si­ble wor­lds...

I didn’t un­der­stand this. Propo­si­tions (as op­posed to things which ex­press propo­si­tions) are not “in” wor­lds, and noth­ing of my ar­gu­ment in­volved iden­ti­fy­ing any­thing across mul­ti­ple wor­lds. EY’s OP stated that in or­der for an [em­piri­cal] claim to be mean­ingful, it has to con­strain pos­si­ble wor­lds, e.g. dis­t­in­guish those wor­lds in which it is true from those in which it is false. Since a state­ment about the mean­ingful­ness of propo­si­tions doesn’t do this (i.e. it’s a pri­ori true or false of all pos­si­ble wor­lds), it can­not be an em­piri­cal claim.

So I haven’t said any­thing about essence, nor does any part of my ar­gu­ment re­quire refer­ence to essence.

Also, ‘GRT’ is not in any case log­i­cally true; cer­tainly it is not an ax­iom, and there is no rea­son to treat it as one.

Agreed, it is not a merely logical claim. Given that it is also not an empirical (i.e. physics+logic) claim, and given my premise (1), which I take EY to hold, we can conclude that the GRT is meaningless.

My mis­take. When you said “physics+logic,” I thought you were talk­ing about ex­press­ing propo­si­tions in gen­eral with physics and/​or logic (as op­posed to re­duc­ing ev­ery­thing to logic), rather than talk­ing about mixed-refer­ence as­ser­tions in par­tic­u­lar (as op­posed to ‘pure’ logic). I think you’ll need to ex­plain what you mean by “logic”; Eliezer’s no­tion of mixed refer­ence al­lows that some state­ments are just physics, with­out any log­i­cal con­structs added.

On the other hand, if by “propo­si­tion” you mean “the spe­cific mean­ing of a sen­tence,”

No, this isn't what I mean. By 'proposition' I mean a sentence, considered independently of its particular manifestation in a language. For example, 'Schnee ist weiss' and 'Snow is white' express the same proposition. Saying and writing 'Schnee ist weiss' express the same proposition.

What ‘Sch­nee ist weiss’ and ‘Snow is white’ have in com­mon is their mean­ing, their sense. A propo­si­tion is the spe­cific mean­ing of a declar­a­tive sen­tence, i.e., what it de­clares.

I didn’t un­der­stand this. Propo­si­tions (as op­posed to things which ex­press propo­si­tions) are not “in” worlds

Then they don't exist. By 'the world' I simply mean 'everything that is,' and by 'possible world' I just mean 'how everything-that-is could have been.' The representational content of assertions (i.e., their propositions), even if it somehow exists outside the physical world, still has to be related in particular ways to our utterances, and those relations can vary across physical worlds even if propositions (construed non-physically) cannot. The utterance 'the cat is on the mat' in our world expresses the proposition ⟨the cat is on the mat⟩. But in other worlds, 'the cat is on the mat' could have expressed a different proposition, or no proposition at all. Now let's revisit your (4):

“If the propo­si­tion “the cat is on the mat” con­strains the pos­si­ble wor­lds, then the propo­si­tion “the propo­si­tion ‘the cat is on the mat’ is mean­ingful” does not con­strain the pos­si­ble wor­lds.”

A clearer way to put this is: If the proposition p, ⟨the cat is on the mat⟩, varies in truth-value across possible worlds, then the distinct proposition q, ⟨the proposition 'the cat is on the mat' is meaningful⟩, does not vary in truth-value across possible worlds. But what does it mean to say that a proposition is meaningful? Propositions just are the meaning of assertions. There is no such thing as a 'meaningless proposition.' So we can rephrase q as really saying: ⟨the proposition 'the cat is on the mat' exists⟩. In other words, you are claiming that all propositions exist necessarily, that they exist at (or relative to) every possible world, though their truth-value may or may not vary from world to world. Once we analyze away the claim that propositions are 'meaningful' as really just the claim that certain propositions/meanings exist, do you still have any objections or concerns?

(Also, it should be ob­vi­ous to any­one who thinks that ‘pos­si­ble wor­lds’ are mere con­structs that do not ul­ti­mately ex­ist, that ‘propo­si­tions’ are also mere con­structs in the same way. We can choose to in­ter­re­late these two con­structs in var­i­ous ways, but if we en­dorse phys­i­cal­ism we can also rea­son us­ing one while hold­ing con­stant the fact that the other doesn’t ex­ist.)

Given that it is also not an em­piri­cal (i.e. a physics+logic claim), and given my premise (1), which I take EY to hold, then we can con­clude that the GRT is mean­ingless.

No, GRT is an empirical claim. You defined GRT as the proposition ⟨everything meaningful can be expressed by physics+logic⟩. But the actual Great Reductive Thesis says: ⟨everything true can be expressed by physics+logic⟩. Everything true is meaningful, so your formulation is part of GRT; but it isn't the whole thing. An equivalent way to formulate GRT is as the conjunction of the following two theses:

Ex­press­ibil­ity: All propo­si­tions that are true in our world can be ex­pressed by ut­ter­ances in our world.

Logico-Phys­i­cal­ism: Every propo­si­tion that is true in our world is ei­ther purely phys­i­cal-and/​or-log­i­cal, or can be com­pletely an­a­lyzed into a true propo­si­tion that is purely phys­i­cal-and/​or-log­i­cal.

Both theses are empirical claims; we could imagine worlds where either one is false, or where both are. But we may have good reason to suspect that we do not inhabit such a world, because there appear to be no inexpressible truths and no irreducibly neither-physical-nor-logical truths. For example, we could have lived in a world in which qualia were real and inexpressible (which would violate Expressibility), and/or one in which they were real and irreducible (which would violate Logico-Physicalism). But the physicalistically inclined doubt that there are such qualia in our universe.

We have a couple of easy issues to get out of the way. The first is the use of the term 'proposition'. That term is famously ambiguous, and so I'm not attached to using it in one way or another, if I can make myself understood. I'm just trying to use this term (and all my terms) as EY is using them. In this case, I took my cue from this: http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/

Med­i­ta­tion: What rule could re­strict our be­liefs to just propo­si­tions that can be mean­ingful, with­out ex­clud­ing a pri­ori any­thing that could in prin­ci­ple be true?

EY does not seem to in­tend ‘propo­si­tion’ here to be iden­ti­cal to ‘mean­ing’. At any rate, I’m happy to use what­ever term you like, though I wish to dis­cuss the bear­ers of truth value, and not mean­ings.

You defined GRT as the propo­si­tion . But the ac­tual Great Re­duc­tive Th­e­sis says: .

I don’t want to define the GRT at all. I’m us­ing EY’s defi­ni­tion, from the OP:

And the Great Re­duc­tion­ist Th­e­sis can be seen as the propo­si­tion that ev­ery­thing mean­ingful can be ex­pressed this way even­tu­ally.

You might want to dis­agree with EY about this, but for the pur­poses of my ar­gu­ment I just want to talk about EY’s con­cep­tion of the GRT. Nev­er­the­less, I think EY’s con­cep­tion, and there­fore mine, fol­lows from yours, so it may not mat­ter much as long as you ac­cept that ev­ery­thing false should also be ex­press­ible by physics+logic (as EY, I be­lieve, wants to main­tain).

I’d like to get these two is­sues out of the way be­fore re­spond­ing to the rest of your in­ter­est­ing post. Let me know what you think.

Eliezer is not very at­ten­tive to the dis­tinc­tion be­tween propo­si­tions, sen­tences (or sen­tence-types), and ut­ter­ances (or sen­tence-to­kens). We need not im­port that am­bi­guity; it’s already caused prob­lems twice, above. An ut­ter­ance is a spe­cific, spa­tiotem­po­rally lo­cated com­mu­ni­ca­tion. Two differ­ent ut­ter­ances may be the same sen­tence if they are ex­pressed in the same way, and they in­tend the same propo­si­tion if they ex­press the same mean­ing. So:

A) ‘Sch­nee ist weiss.’
B) ‘Snow is white.’
C) ‘Snow is white.’

There are three ut­ter­ances above, two dis­tinct sen­tences (or sen­tence-types), and only one dis­tinct propo­si­tion/​mean­ing. Clearer?

You might want to dis­agree with EY about this, but for the pur­poses of my ar­gu­ment I just want to talk about EY’s con­cep­tion of the GRT.

EY misspoke. As with the proposition/utterance confusion, my interest is in evaluating the substantive merits or demerits of an Eliezer steel man, not in fixating on his overly lax word choice. Reductionism is falsified if there are true sentences that cannot be reduced, not just if there are meaningful but false ones that cannot be so reduced. It’s obvious that EY isn’t concerned with the reducibility of false sentences because he doesn’t consider it a grave threat, for example, that the sentence “Some properties are not reducible to physics or logic.” is meaningful.

There are three ut­ter­ances above, two dis­tinct sen­tences (or sen­tence-types), and only one dis­tinct propo­si­tion/​mean­ing. Clearer?

Which one is the proper ob­ject of truth-eval­u­a­tion, and which one is sub­ject to the ques­tion ‘is it mean­ingful’? EY’s po­si­tion through­out this se­quence, I think, has been that whichever is the proper ob­ject of truth-eval­u­a­tion is also the one about which we can ask ‘is it mean­ingful?’ If you don’t think these can be the same, then your view differs from EY’s sub­stan­tially, and not just in ter­minol­ogy. How about this? I’ll use the term ‘gax’ for the thing that is a) prop­erly truth-evaluable, and b) sub­ject to the ques­tion ‘is this mean­ingful’.

EY mis­spoke.

Maybe, but the en­tire se­quence is about the ques­tion of a crite­rion for the mean­ingful­ness of gaxes. His mo­ti­va­tion may well be to avert the dis­aster of con­sid­er­ing a true gax to be mean­ingless, but his stated goal through­out the se­quence is es­tab­lish­ing a crite­rion for mean­ingful­ness. So I guess I have to ask at this point: other than the fact that you think his ar­gu­ment stands stronger with your ver­sion of the GRT, do you have any ev­i­dence (stronger than his ex­plicit state­ment oth­er­wise) that this is EY’s ac­tual view?

The propo­si­tion/​mean­ing is what we eval­u­ate for truth. Thus ut­ter­ances shar­ing the same propo­si­tion can­not differ in truth-value.

and which one is sub­ject to the ques­tion ‘is it mean­ingful’?

Ut­ter­ances or ut­ter­ance-types can be eval­u­ated for mean­ingful­ness. To ask ‘Is that ut­ter­ance mean­ingful?’ is equiv­a­lent to ask­ing, for ap­par­ent declar­a­tive sen­tences, ‘Does that ut­ter­ance cor­re­spond to a propo­si­tion/​mean­ing?’

EY’s po­si­tion through­out this se­quence, I think, has been that whichever is the proper ob­ject of truth-eval­u­a­tion is also the one about which we can ask ‘is it mean­ingful?’

You could ask whether sen­tence-types or -to­kens in­tend propo­si­tions (i.e., ‘are they mean­ingful?‘), and, if they do in­tend propo­si­tions, whether they are true (i.e., whether the propo­si­tions cor­re­spond to an ob­tain­ing fact). But, judg­ing by how Eliezer uses the word ‘propo­si­tion,’ he doesn’t have a spe­cific stance on what we should be eval­u­at­ing for truth or mean­ingful­ness. He’s speak­ing loosely.

the en­tire se­quence is about the ques­tion of a crite­rion for the mean­ingful­ness of gaxes (in his words).

I think the se­quence is about truth, not mean­ing. He takes mean­ing largely for granted, in or­der to dis­cuss truth-con­di­tions for differ­ent classes of sen­tence. He gave a cou­ple of hints at ways to de­ter­mine that some ut­ter­ance is mean­ingless, but he hasn’t at all gone into the meta-se­man­tic pro­ject of es­tab­lish­ing how ut­ter­ances ac­quire their con­tent or how con­tent in the brain gets ‘glued’ (refer­ence mag­netism) to propo­si­tions with well-defined truth-con­di­tions. He hasn’t said any­thing about what sorts of ob­jects can and can’t be mean­ingful, or about the mean­ing of non-as­sertive ut­ter­ances, or about how we could de­sign an A.I. with in­ten­tion­al­ity (cf. the Chi­nese room), or about what in the world non-em­piri­cal state­ments de­note. So I take it that he’s mostly in­ter­ested in truth here, and mean­ing is just one of the step­ping stones in that di­rec­tion. Hence I don’t take his talk of ‘propo­si­tions’ too se­ri­ously.

other than the fact that you think his ar­gu­ment stands stronger with your ver­sion of the GRT, do you have any ev­i­dence (stronger than his ex­plicit state­ment oth­er­wise) that this is EY’s ac­tual view?

It would be a waste of effort to dig other ev­i­dence up. Ascribing your ver­sion of GRT to Eliezer re­quires us to the­o­rize that he didn’t spend 30 sec­onds think­ing about GRT, since 30 sec­onds is all it would take to de­ter­mine its false­hood. If that ver­sion of GRT is his view, then his view can be dis­missed im­me­di­ately and we can move on to more in­ter­est­ing top­ics. If my ver­sion of GRT is closer to his view, then we can con­tinue to dis­cuss whether the bal­ance of ev­i­dence sup­ports it. So re­gard­less of EY’s ac­tual views, it’s pointless to dwell on the Most Ab­surd Pos­si­ble In­ter­pre­ta­tion thereof, es­pe­cially since not a sin­gle one of his claims el­se­where in the se­quence de­pends on or sup­ports the claim that all ir­re­ducibly non-phys­i­cal and non-log­i­cal claims are mean­ingless.

But, judg­ing by how Eliezer uses the word ‘propo­si­tion,’ he doesn’t have a spe­cific stance on what we should be eval­u­at­ing for truth or mean­ingful­ness. He’s speak­ing loosely.

Okay, it doesn’t look like we can make any progress here, since we can­not agree on what EY’s stance is sup­posed to be. I think you’re wrong that EY hasn’t said much about the prob­lem of mean­ing in this se­quence. That’s been its ex­plicit and con­tin­u­ous sub­ject. The ques­tion through­out has been

What rule would re­strict our be­liefs to just state­ments that can be mean­ingful, with­out ex­clud­ing a pri­ori any­thing that could in prin­ci­ple be true?

...and this seems to have been dis­cussed through­out, e.g.:

Be­ing able to imag­ine that your thoughts are mean­ingful and that a cor­re­spon­dence be­tween map and ter­ri­tory is be­ing main­tained, is no guaran­tee that your thoughts are true. On the other hand, if you can’t even imag­ine within your own model how a piece of your map could have a trace­able cor­re­spon­dence to the ter­ri­tory, that is a very bad sign for the be­lief be­ing mean­ingful, let alone true. Check­ing to see whether you can imag­ine a be­lief be­ing mean­ingful is a test which will oc­ca­sion­ally throw out bad be­liefs, though it is no guaran­tee of a be­lief be­ing good.

Okay, but what about the idea that it should be mean­ingful to talk about whether or not a space­ship con­tinues to ex­ist af­ter it trav­els over the cos­molog­i­cal hori­zon? Doesn’t this the­ory of mean­ingful­ness seem to claim that you can only sen­si­bly imag­ine some­thing that makes a differ­ence to your sen­sory ex­pe­riences?

But if you’ve been read­ing the same se­quence I have, and we still don’t agree on that, then we should prob­a­bly move on. That said...

If that ver­sion of GRT is his view, then his view can be dis­missed im­me­di­ately and we can move on to more in­ter­est­ing top­ics.

I’d be in­ter­ested to know what you have in mind here. Why would the ‘mean­ingful­ness’ ver­sion of the GRT be so easy to dis­miss?

it’s pointless to dwell on the Most Ab­surd Pos­si­ble In­ter­pre­ta­tion thereof

I want, first, to be clear that I’ve found this con­ver­sa­tion very helpful and in­ter­est­ing (as all my con­ver­sa­tions with you have been). Se­cond, the above is un­fair: un­der­stand­ing EY in terms of what he ex­plic­itly and liter­ally says is not ‘the most ab­surd pos­si­ble in­ter­pre­ta­tion’. It may be the wrong in­ter­pre­ta­tion, but to take him at face value can­not be called ab­surd.

The col­lo­quial mean­ing of “propo­si­tion” is “an as­ser­tion or pro­posal”. The sim­plest ex­pla­na­tion for EY’s use of the term is that he was os­cillat­ing some­what be­tween this col­lo­quial sense and its stric­ter philo­soph­i­cal mean­ing, “the truth-func­tional as­pect of an as­ser­tion”. A state­ment’s philo­soph­i­cal propo­si­tion is (or is iso­mor­phic to) its mean­ing, es­pe­cially inas­much as its mean­ing bears on its truth-con­di­tions.

Con­fu­sion arose be­cause EY spoke of ‘mean­ingless’ propo­si­tions in the col­lo­quial sense, i.e., mean­ingless lin­guis­tic ut­ter­ances of a seem­ingly as­sertive form. If we mis­in­ter­pret this as as­sert­ing the ex­is­tence of mean­ingless propo­si­tions in the philo­soph­i­cal sense, then we sud­denly lose track of what a ‘propo­si­tion’ even is.

The in­tu­itive idea of a propo­si­tion is that it’s what differ­ent sen­tences that share a mean­ing have in com­mon; treat­ing propo­si­tions as the lo­cus of truth-eval­u­a­tion al­lows us to rule out any doubt as to whether “Sch­nee ist weiss.” and “Snow is white.” could have differ­ent truth-val­ues while hav­ing iden­ti­cal mean­ings. But if we as­sert that there are also propo­si­tions cor­re­spond­ing to mean­ingless lo­cu­tions, or that some propo­si­tions are non-truth-func­tional, then it ceases to be clear what is or isn’t a ‘propo­si­tion,’ and the term en­tirely loses its the­o­ret­i­cal value. Since Eliezer has made no un­equiv­o­cal as­ser­tion about there be­ing mean­ingless propo­si­tions in the philo­soph­i­cal sense, the sim­pler and more char­i­ta­ble in­ter­pre­ta­tion is that he was just speak­ing loosely and in­for­mally.

My sense is that he’s spent a lit­tle too much time im­mersed in pos­i­tivis­tic cul­ture, and has bor­rowed their way of speak­ing to an ex­tent, even though he re­jects and com­pli­cates most of their doc­trines (e.g., al­low­ing that em­piri­cally untestable doc­trines can be mean­ingful). This makes it a lit­tle harder to grasp his mean­ing and pur­pose at times, but it doesn’t weaken his doc­trines, char­i­ta­bly con­strued.

But if you’ve been read­ing the same se­quence I have, and we still don’t agree on that

I just have higher stan­dards than you do for what it takes to be giv­ing a com­plete ac­count of mean­ing, as op­posed to a com­plete ac­count of ‘truth’. My claim is not that Eliezer has said noth­ing about mean­ing; it’s that he’s only touched on mean­ing to get a bet­ter grasp on truth (or on war­ranted as­ser­tion in gen­eral), which is why he hasn’t been as care­ful about dis­t­in­guish­ing and un­pack­ing metase­man­tic dis­tinc­tions such as ut­ter­ance-vs.-propo­si­tion as he has been about dis­t­in­guish­ing and un­pack­ing se­man­tic and meta­phys­i­cal dis­tinc­tions such as phys­i­cal-vs.-log­i­cal.

Why would the ‘mean­ingful­ness’ ver­sion of the GRT be so easy to dis­miss?

As I said above, “Some prop­er­ties are not re­ducible to physics or logic.” is a mean­ingful state­ment that is in­com­pat­i­ble with the GRT world-view. It is mean­ingful, though it may be false; if the de­nial of GRT were mean­ingless, then GRT would be a tau­tol­ogy, and Eliezer would as­sign it Pr ap­proach­ing 1, whereas in fact he as­signs it Pr .5.

Eliezer’s claim has not been, for ex­am­ple, that epiphe­nom­e­nal­ism, be­ing anti-phys­i­cal­is­tic, is gib­ber­ish; his claim has been that it is false, and that no ev­i­dence can be given in sup­port of it. If he thought it were gib­ber­ish, then his re­jec­tion of it would count as gib­ber­ish too.

un­der­stand­ing EY in terms of what he ex­plic­itly and liter­ally says is not ‘the most ab­surd pos­si­ble in­ter­pre­ta­tion’.

It’s not the most ab­surd in­ter­pre­ta­tion in that it has the least ev­i­dence as an in­ter­pre­ta­tion. It’s the most ab­surd inas­much as it as­cribes a max­i­mally ab­surd (be­cause in­ter­nally in­con­sis­tent) world-view to EY, i.e., the world-view that the nega­tion of re­duc­tion­ism is both mean­ingless and (with prob­a­bil­ity .5) true. Again, the sim­plest ex­pla­na­tion is sim­ply that he was speak­ing loosely, and when he said “ev­ery­thing mean­ingful can be ex­pressed this way even­tu­ally” he meant “ev­ery­thing ex­press­ible that is the case can be ex­pressed this way [i.e., phys­i­cally-and-log­i­cally] even­tu­ally”. He was, in other words, tac­itly re­strict­ing his do­main to truths, and hop­ing his read­er­ship would rec­og­nize that false­hoods are be­ing brack­eted. Other­wise this post would be about ar­gu­ing for the mean­ingless­ness of doc­trines like epiphe­nom­e­nal­ism and the­ism, rather than ar­gu­ing for the re­ducibil­ity of un­ortho­dox truths (e.g., coun­ter­fac­tu­als and ap­plied/​‘wor­ldly’ math­e­mat­ics).

As I said above, “Some prop­er­ties are not re­ducible to physics or logic.” is a mean­ingful state­ment that is in­com­pat­i­ble with the GRT world-view. It is mean­ingful, though it may be false; if the de­nial of GRT were mean­ingless, then GRT would be a tau­tol­ogy, and Eliezer would as­sign it Pr ap­proach­ing 1, whereas in fact he as­signs it Pr .5.

So we’re assuming for the purposes of your argument here that the GRT is about meaningfulness, and we should distinguish this from your (and perhaps EY’s) considered view of the GRT. So let’s call the ‘meaningfulness’ version I attributed to EY ‘GRTm’, and the one you attribute to him ‘GRTt’.

We can gloss the difference thus: the GRTt states that anything true must be expressible in physical+logical, or merely logical, terms (tautologies, etc.).

The GRTm states that anything true or false must be expressible in physical+logical, or merely logical, terms.

Your ar­gu­ment ap­pears to be that on the GRTm view, the sen­tence “some prop­er­ties are not re­ducible to physics or logic” would be mean­ingless rather than false. You take this to be a re­duc­tio, be­cause that sen­tence is clearly mean­ingful and false. Why do you think that, on the GRTm, this sen­tence would be mean­ingless? The GRTm view, along with the GRTt view, al­lows that false state­ments can be mean­ingful. And I see no rea­son to think that the above sen­tence couldn’t be ex­pressed in physics+logic, or merely log­i­cal terms.

So I’m not see­ing the force of the re­duc­tio. You don’t ar­gue for the claim that “some prop­er­ties are not re­ducible to physics or logic” would be mean­ingless on the GRTm view, so could you go into some more de­tail there?

One way to get at what I was say­ing above is that GRTt as­serts that all true state­ments are an­a­lyz­able into truth-con­di­tions that are purely phys­i­cal/​log­i­cal, while GRTm as­serts that all mean­ingful state­ments are an­a­lyz­able into truth-con­di­tions that are purely phys­i­cal/​log­i­cal. If we an­a­lyze “Some prop­er­ties are not re­ducible to physics or logic.” into phys­i­cal/​log­i­cal truth-con­di­tions, we find that there is no state we can de­scribe on which it is true; so it be­comes a log­i­cal false­hood, a state­ment that is false given the empty set of as­sump­tions. Equally, GRTm, if mean­ingful, is a tau­tol­ogy if we an­a­lyze its mean­ing in terms of its logico-phys­i­cally ex­press­ible truth-con­di­tions; there is no par­tic­u­lar state of af­fairs we can de­scribe in logico-phys­i­cal terms in which GRTm is false.

But per­haps fo­cus­ing on anal­y­sis into truth-con­di­tions isn’t the right ap­proach. Shift­ing to your con­cep­tion of GRTm and GRTt, can you find any points where Eliezer ar­gues for GRTm? An ar­gu­ment for GRTm might have the fol­low­ing struc­ture:

Some sen­tences seem to as­sert non-phys­i­cal, non-log­i­cal things.

But the non-physi­colog­i­cal­ity of those things makes those sen­tences mean­ingless.

On the other hand, if Eliezer is re­ally try­ing to en­dorse GRTt, his ar­gu­ments will in­stead look like this:

Some sen­tences seem to be true but non-physi­colog­i­cal.

But those sen­tences are ei­ther false or an­a­lyz­able/​re­ducible to purely physi­colog­i­cal truths.

So non-physi­colog­i­cal truths in gen­eral are prob­a­bly ex­press­ible purely physi­colog­i­cally.

No­tice that the lat­ter ar­gu­men­ta­tive ap­proach is the one he takes in this very ar­ti­cle, where he in­tro­duces ‘The Great Re­duc­tion­ist Pro­ject.’ This gives us strong rea­son to fa­vor GRTt as an in­ter­pre­ta­tion over GRTm, even though viewed in iso­la­tion some of his lan­guage does sug­gest GRTm. Is there any di­alec­ti­cal ev­i­dence in fa­vor of the al­ter­na­tive in­ter­pre­ta­tion GRTm? (I.e., ev­i­dence de­rived from the struc­ture of his ar­gu­ments.)

In your lat­est se­quence ar­ti­cle, you de­scribed the great re­duc­tion­ist the­sis as “the propo­si­tion that ev­ery­thing mean­ingful can be ex­pressed this way [i.e. physics and/​or logic] even­tu­ally.”

Another LWer and I are in a de­bate over your in­ten­tion here. One of us thinks that you must mean “ev­ery­thing true (and not nec­es­sar­ily ev­ery­thing false) can be ex­pressed this way”

The other thinks you mean “ev­ery­thing true and ev­ery­thing false (i.e. ev­ery­thing mean­ingful) can be ex­pressed this way”.

Can you clear this up for us?

EY replied:

Every­thing true and most mean­ingful false state­ments can be ex­pressed this way. Suffi­ciently con­fused ver­bal state­ments may have no trans­la­tion, even as a set of log­i­cal ax­ioms pos­sess­ing no model, yet still be op­er­a­ble as slo­gans. I.e. “Like all mem­bers of my tribe, I firmly be­lieve that clams up with­out no finger in­side plus plus claims in the clams with­out no finger!”

So I replied:

So, just to be su­per clear (since I’m now los­ing this ar­gu­ment) you mean that there are state­ments that are both mean­ingful and false, but are not ex­press­ible in the terms you de­scribe in Log­i­cal Pin­point­ing, Causal Refer­ence, and Mixed Refer­ence?

And he said:

Nope. That state­ment is mean­ingless is false.

So I’m ac­tu­ally not much less con­fused. His first re­ply seems to sup­port GRTt. His sec­ond re­ply (the first word of it any­way) seems to sup­port GRTm. Thoughts?

I think “Every­thing true and most mean­ingful false state­ments can be ex­pressed this way.” is al­most com­pletely clear. Un­less a per­son is be­ing de­liber­ately am­bigu­ous, say­ing “most P are Q” in or­di­nary English con­ver­sa­tion has the im­pli­ca­ture “some P aren’t Q.”

I’m not even clear on what the gram­mar of “That state­ment is mean­ingless is false.” is, much less the mean­ing, so I can’t com­ment on that state­ment. I’m also not clear on how broad “the terms you de­scribe in Log­i­cal Pin­point­ing, Causal Refer­ence, and Mixed Refer­ence” are; he may think that he’s sketched mean­ingful­ness crite­ria some­where in those ar­ti­cles that are more in­clu­sive than “The Great Re­duc­tion­ist Pro­ject” it­self al­lows.

I’m also not clear on how broad “the terms you de­scribe in Log­i­cal Pin­point­ing, Causal Refer­ence, and Mixed Refer­ence” are; he may think that he’s sketched mean­ingful­ness crite­ria some­where in those ar­ti­cles that are more in­clu­sive than “The Great Re­duc­tion­ist Pro­ject” it­self al­lows.

I think that was fairly clear. Each of those ar­ti­cles is ex­plic­itly about a form of refer­ence sen­tences can have: log­i­cal, phys­i­cal, or logi­co­phys­i­cal, and his state­ment of the GRT was just that all mean­ingful (or in your read­ing, true) things can be ex­pressed in these ways.

But it oc­curs to me that we can file some­thing away, and to­mor­row I’m go­ing to read over your last three or four replies and think about the GRTt whether or not it’s EY’s view. That is, we can agree that the GRTm view is not a ten­able the­sis as we un­der­stand it.

One pos­si­ble source of con­fu­sion: What is the mean­ing of the qual­ifier “phys­i­cal”? “Phys­i­cal,” “causal,” “ver­ifi­able,” and “taboo-able/​an­a­lyz­able” all have differ­ent senses, and it’s pos­si­ble that for some of them Eliezer is more will­ing to al­low mean­ingful false­hoods than for oth­ers.

Yeah. I’ll re-read his posts, too. In all likelihood I didn’t even think about the ambiguity of some of his statements, because I was interpreting everything in light of my pet theory that he subscribes to GRTt. I think he does subscribe to GRTt, but I may have missed some important positivistic views of his if I was only focusing on the project of his that he likes. Some of the statements you cited where he discusses ‘meaning’ do create a tension with GRTt.

You’d just about con­vinced me, un­til I reread the OP and found it con­sis­tently and un­equiv­o­cally dis­cussing the ques­tion of mean­ingful­ness. So be­fore we go on, I’m just go­ing to PM Eliezer and ask him what he meant. I’ll let you know what he says if he replies.

From the logic point of view, coun­ter­fac­tu­als are un­prob­le­matic, in that I can prove con­sis­tency of my fa­vorite coun­ter­fac­tual logic by ex­hibit­ing a model. Then as far as a lo­gi­cian is con­cerned, we are done: our coun­ter­fac­tual wor­lds live in the math­e­mat­i­cal struc­ture of the ex­hibited model.

From the com­puter sci­ence point of view a lit­tle more is re­quired, but as luck would have it, we can im­ple­ment coun­ter­fac­tu­als in some causal mod­els. If your causal model is an ac­tual cir­cuit, then not only is it perfectly mean­ingful to ask “the out­put of the cir­cuit is 1, what would be the out­put if I changed gate_0212 from OR to AND?” but it is pos­si­ble to im­ple­ment the coun­ter­fac­tual di­rectly, and check. This is be­cause we know enough about the causal model to en­sure coun­ter­fac­tual in­var­i­ance (e.g. other gates do not change). Peo­ple use this kind of coun­ter­fac­tual rea­son­ing to de­bug pro­grams and cir­cuits all the time! So from the “comp. sci” point of view, coun­ter­fac­tu­als are un­prob­le­matic. The coun­ter­fac­tual uni­verse “ex­ists” in the op­er­a­tional sense of us hav­ing an effec­tive pro­ce­dure to get us there.
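The circuit case can be made concrete with a short sketch. The gate name `gate_0212` is borrowed from the comment above; the wiring around it is invented purely for illustration:

```python
def run_circuit(a, b, c, gate_0212="OR"):
    """Evaluate a toy two-gate circuit.

    wire1  = gate_0212(a, b)   (the gate we may perform surgery on)
    output = AND(wire1, c)

    The wiring is hypothetical; only the idea of swapping a named
    gate and re-running the circuit comes from the comment above.
    """
    if gate_0212 == "OR":
        wire1 = a | b
    elif gate_0212 == "AND":
        wire1 = a & b
    else:
        raise ValueError("unknown gate type")
    return wire1 & c

# Factual world: with inputs (1, 0, 1), the circuit's output is 1.
factual = run_circuit(1, 0, 1)
# Counterfactual surgery: same inputs, gate_0212 changed from OR to AND.
counterfactual = run_circuit(1, 0, 1, gate_0212="AND")
```

Because we know the rest of the circuit is unaffected by the intervention (counterfactual invariance), the second run directly answers “what would the output have been?”: here it drops from 1 to 0.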

The problem arises when you are trying to deal with relatively poorly defined problems, like, say, problems in statistics or machine learning involving measurements of human populations or vitals in a patient, with a ton of uncertainty about functional mechanisms and their invariance. Actually, even in that case, people try to construct effective procedures to reach counterfactual universes, or something close (see, e.g., Imai’s paper: http://imai.princeton.edu/research/Design.html). The question is then the following. Do counterfactual worlds in this case:

(a) not ex­ist (on­tolog­i­cal prob­lem).

(b) ex­ist, but we do not have a one to one map­ping from the in­for­ma­tion we have to a unique coun­ter­fac­tual world de­scribing the ques­tion we are in­ter­ested in, even in prin­ci­ple (iden­ti­fi­ca­tion prob­lem).

(c) ex­ist, we do not have a one to one map­ping from the in­for­ma­tion we have to a unique coun­ter­fac­tual world de­scribing the ques­tion we are in­ter­ested in, but we can get such a map­ping if we learn a LOT more about the prob­lem, and ob­serve many many more vari­ables (ig­no­rance prob­lem).

Fur­ther to my other com­ment, how would one define a coun­ter­fac­tual in the Game of Life? Surely we should be able to an­a­lyze this sim­ple case first if we want to talk about coun­ter­fac­tu­als in the “real world”?

Say we have a blank grid. It would be rea­son­able to say “if this blank grid had a glider, the glider would move up and left” even if there is no ac­tual glider on the grid. You can still make a men­tal model of what would hap­pen in a changed grid, even if that grid isn’t in­stan­ti­ated. I chose the ex­am­ple of a glider to show that you don’t ac­tu­ally have to run a step-by-step simu­la­tion of the grid to pre­dict be­hav­ior and thus em­pha­size that a coun­ter­fac­tual is a men­tal model, not an ac­tual uni­verse. Coun­ter­fac­tu­als re­quire a uni­verse and a model that is iso­mor­phic to that uni­verse in some way, but the iso­mor­phism doesn’t have to be perfect.
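The glider prediction can even be checked mechanically. The step function below is a minimal sketch of the Life rules on an unbounded grid represented as a set of live-cell coordinates, not anyone's canonical implementation:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider (y grows downward):
#   .O.
#   ..O
#   OOO
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

world = glider
for _ in range(4):
    world = step(world)

# After four generations the glider is the same pattern, moved one
# cell down and one cell right, without the grid ever being "real".
assert world == {(x + 1, y + 1) for (x, y) in glider}
```

The point of the example survives the mechanization: the prediction “the glider would move” is about a model, whether or not any physical grid instantiates it.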

I like this ex­am­ple, and it counts as a coun­ter­fac­tual in our uni­verse, where there is no ac­tual glider drawn on an ac­tual blank grid, but I am not sure it would count as a coun­ter­fac­tual in a GoL uni­verse, un­less you define such a uni­verse to con­tain only a sin­gle blank can­vas and noth­ing else.

So what you’re say­ing is that if we did define such a uni­verse to con­tain only a sin­gle blank can­vas and noth­ing else, our in­ter­nal model of a grid with a glider would be a good ex­am­ple of a coun­ter­fac­tual?

I am trying to nail down the definition of a counterfactual in a GoL universe. Clearly, if you define this universe as a blank canvas, every game is a counterfactual. However, if the GoL universe is a collection of all possible games (hello, Tegmark!!), then there are no counterfactuals of the type you describe in it. However, what army1987 suggested would probably still count as a counterfactual: given a realization of a game and a certain position in it, find whether another realization, with an extra glider, converges to the same position. The counterfactualness there comes from privileging one game from the lot, not from mapping it to our universe.

What you sug­gest is one type of a coun­ter­fac­tual: change the state. Eras­ing a glider is, of course, ille­gal un­der the rules of the game, so to make it a le­gal game, you have to trace it back­wards from the new state, or else you are not talk­ing about the GoL any­more. This cre­ates an in­ter­est­ing aside.

Like real life, the Game of Life is not well-posed when run backwards: infinitely many configurations are legal just one simulation step back from a given one. This is because objects in the Game can die without a trace, and so can appear without a cause when run backward. This is similar to the way the world appears to us macroscopically: there is no way to tell the original shape of a drop of ink after it is dissolved in a bucket of water. This situation is known as the reversibility problem in cellular automata.
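The many-to-one character of the forward rule is easy to exhibit directly. In the sketch below (assuming the usual Life rules on a set of live cells), two different configurations step to the same successor, so there is no unique way to run the game backwards:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A blank grid stays blank...
blank_next = step(set())
# ...but isolated cells also die without a trace, yielding the same blank grid.
lonely_next = step({(0, 0), (10, 10)})

assert blank_next == lonely_next == set()
```

Given only the blank successor, nothing determines which of these (or infinitely many other) predecessors was the “real” one.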

This free­dom to cre­ate life out of noth­ing when simu­lat­ing GoL back­wards does not help us, how­ever, in con­struct­ing the same start­ing con­figu­ra­tion as the one with the glider not erased, be­cause GoL is de­ter­minis­tic in the for­ward di­rec­tion, and you can­not ar­rive at two differ­ent con­figu­ra­tions when start­ing from the same one. But it does let us an­swer the fol­low­ing hy­po­thet­i­cal: would adding a glider have made a differ­ence in the fu­ture? I.e. would the glider in ques­tion col­lide with an­other ob­ject and dis­in­te­grate with­out a trace af­ter sev­eral turns?

This “but­terfly effect” in­ves­ti­ga­tion is triv­ial in the GoL and similar ir­re­versible au­tomata with sim­ple rules, but it is quite sug­ges­tive if we con­sider the origi­nal ques­tion:

We can liken Oswald to your glider and see if removing it from the simulation (“counterfactual surgery”) still results in the same final configuration (JFK shot). If so, we can declare the above statement to be “true”, though not in the same sense as “Oswald shot JFK” is true, but in the same sense as a proved theorem is “true”: its statement follows from its premises.
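A sketch of this “counterfactual surgery” procedure in Game of Life terms: run the same world forward with and without the object, and compare the final configurations. The background here is just a blank grid, invented for illustration, so the glider trivially survives and the two futures differ; in a richer configuration the glider could be destroyed without a trace, in which case the futures would coincide.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

def run(live, generations):
    for _ in range(generations):
        live = step(live)
    return live

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
background = set()  # a blank grid; a real "world" would contain other objects

# Counterfactual surgery: the same world with and without the glider.
with_glider = run(background | glider, 12)
without_glider = run(background, 12)

# On a blank background the glider survives forever,
# so the surgery makes a lasting difference.
assert with_glider != without_glider
```

The comparison of the two runs is the whole content of the counterfactual claim; no second universe is needed, only a second simulation.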

I am find­ing the same prob­lem with all ar­ti­cles in this se­quence that I find with the ex­pla­na­tion of Bayes’ The­o­rem on Yud­kowsky’s main site. There are parts that seem so blind­ingly ob­vi­ous they don’t bear men­tion­ing.

Yet soon there­after, all of a sud­den, I find my­self com­pletely lost. I can un­der­stand parts of the text sep­a­rately, but can’t link them to­gether. I don’t see where it comes from, where it’s go­ing, what prob­lems it’s ad­dress­ing. I find it es­pe­cially difficult to re­late the illus­tra­tions to what’s go­ing on in the text.

I sel­dom have had this prob­lem with the blog posts from the clas­si­cal se­quences (with some ex­cep­tions, such as his quan­tum physics se­quence, which left me similarly con­fused).

Am I the only one who feels this way?

EDIT: upon reflection, I’ve already experienced this phenomenon before, the feeling of a sudden, imperceptible jump from the boringly obvious to the utterly confusing: in college, many lessons would follow this pattern, and it would take intensive study to figure out the steps the professor merrily jumped over, between what are, to them, two categories of the set of blindingly obvious things they already know and need to explain yet again. Maybe there’s some sort of pattern there?

This is a prob­lem known as “bad writ­ing” which I con­tinue to strug­gle with, even af­ter many years. Can you list the first part where you felt lost? Some­where be­tween there and the pre­vi­ous part, I must have skipped some­thing.

I do hope peo­ple ap­pre­ci­ate that all the “blind­ingly ob­vi­ous” parts are parts where (at least in my guessti­ma­tion, and of­ten in my ac­tual ex­pe­rience) some­body else would oth­er­wise get lost. The “ob­vi­ous” is not the same for all peo­ple.

I would tell you about it, but now I’m afraid I’m distracting you from the latest chapter in Methods, which is kind of overdue and eagerly expected (and half a NaNoWriMo novel’s wordcount? what exactly have you been up to?). I swear I’ll take the time to go through the sequence and identify and point out the points at which I got lost, but first I’ll wait for you to publish that chapter.

And yes, I know that one per­son’s ob­vi­ous is an­other’s opaque; af­ter all, that is the very root of this very prob­lem.

@Downvoters: I am genuinely sorry; I’m just being honest here. This is like being addicted to a drug and, after months of waiting, hearing that the next batch is imminent and huge. I’m sort of fretting right now, and I’m probably not the only one.

So, first of all, I’m going to complain that doing this was a pain in the neck, and that commenting/editing would be much easier on Gdocs or some similar application. In fact, I used Gdocs to write this, because doing so in the LW interface would have been intolerable. Still, there you are:

“A sin­gle dis­crete el­e­ment of fun­da­men­tal physics”

I sup­pose you mean an “el­e­men­tary par­ti­cle”? Took me a sec­ond to get it; it’s not the stan­dard ex­pres­sion.

differ­ent low-level phys­i­cal states are in­side or out­side the men­tal image of “some ap­ples on the table” or al­ter­na­tively “a kit­ten on the table”

I found this frankly misleading. When you say “mental image”, I think of an actual visualization, which is not a category a “low-level physical state” can belong to (or be “inside of”). “Mental configuration” or “mental arrangement” might be more appropriate, and “corresponding” or “not corresponding” sound more acceptable. However, I’d rephrase the entire thing differently, as “different low-level physical states whose observation would result in a mental image of some apples on the table or a kitten on the table”.

The picture underneath is confusing because the previous paragraph makes us expect a “brain” or a “head” “visualizing” the “high states”, not the “high states” being somehow (one is a function of the other? a correspondence? identification? belonging?) linked to the “this actual universe in all its low-level glory” picture.

I also find the choice of fuzziness around the edges of picture fragments, and the use of dotted lines, to be rather jarring. Is it supposed to be cute? Because what it conveys to me is “we’re not sure” and “the concept is unclear” and “the correspondence is distant or uncertain”, and that contrasts strongly with the actual text, which is much more rigorous.

At the very least, you may want the line from “the Universe” to “all possible worlds” to end in a thicker dot, and to distort the shape of “all the possible worlds that would result in ‘a bunch of apples on the table’” (that’s what the dotted circle means, right?) to be bigger and more potato-shaped or something, as is traditional to denote “abstract set of stuff whose shape doesn’t matter”; a circle seems too regular, and, in fact, I originally thought it represented a point, not a set. Its shape should also be different from the shape of the “we observe that a cat is on the table” set of possible universes, so as not to imply any relationship between the two.

but I’m not go­ing to draw the image for that one. (We tried, and it came out too crowded.)

Did you need to mention that? Every time I read it, I get distracted wondering what it would have looked like. Perhaps it would be better to make the picture, and to hell with crowdedness.

Con­strain­ing this out­put con­strains the pos­si­ble states of the origi­nal, phys­i­cal in­put uni­verse:

In the next picture, I would have put the arrowheads in the other direction, since that’s the direction of the causality link: universe → observation → model → calculation → six.

fulfilled by a mix­ture of phys­i­cal re­al­ity and log­i­cal validity

“Mix­ture” sounds a lit­tle too an­ar­chic, it con­fused me for a while. Doesn’t “phys­i­cal re­al­ity” come be­fore “log­i­cal val­idity”? What do you think of “com­po­si­tion” in­stead? It im­plies an or­der, that one is com­pounded over the other. “Com­bi­na­tion”, which you used later, seems good too.

“run­ning a log­i­cal func­tion over the phys­i­cal uni­verse”

Sounds like an abuse of lan­guage. Wouldn’t length­en­ing it to “run­ning a log­i­cal func­tion over a model of the phys­i­cal uni­verse” or “run­ning a log­i­cal func­tion over an ob­ser­va­tion of the phys­i­cal uni­verse” be a good trade­off?

(I haven’t had time to go into this last part but it’s an already-pop­u­lar idea in philos­o­phy of com­pu­ta­tion.)

I got dis­tracted again.

And the Great Re­duc­tion­ist Th­e­sis can be seen as the propo­si­tion that ev­ery­thing mean­ingful can be ex­pressed this way even­tu­ally.

Is it true then, that “The GRT defines ‘mean­ingful’ as equiv­a­lent to ‘can be ex­pressed this way’, and thus pos­tu­lates that things that can­not be ex­pressed this way are mean­ingless?” How do we avoid Wittgen­steinon­sense?

self-sensitization

? You mean be­com­ing sen­si­tive to one’s own state of mind? “I no­tice that I am con­fused”?

un­less you be­lieve the Illu­mi­nati planned it all

How about the more impartial (and factual, and logical) “unless you don’t believe LHO acted by himself”? It seems unfair to promote to attention, of all the vast field of hypotheses, a Bavarian organization that seems to have ended circa 1787. You should avoid making jokes that will make many laugh at the expense of pissing off others; it’s kind of a terrible PR strategy.

For the record, I don’t “be­lieve” in any spe­cific con­spir­acy the­ory, and I as­sign high­est prob­a­bil­ity to the “lone nut­ter” chain of events, but I as­sign the “not a lone nut­ter” set of hy­pothe­ses a prob­a­bil­ity that is sig­nifi­cantly above zero; I don’t pre­sume to pro­mote to at­ten­tion any par­tic­u­lar hy­poth­e­sis of that set with the ev­i­dence cur­rently available to the pub­lic. If this po­si­tion de­serves mock­ery, I would like to know it. If it doesn’t, I would like peo­ple to stop act­ing as if the only op­tions were “ac­cept the stan­dard ver­sion and only the stan­dard ver­sion” or “choose one elab­o­rate con­spir­acy the­ory and stick to it in the face of all ev­i­dence (or lack thereof)”.

For in­stance, about the moon land­ing; if you want to use a fact that is caused by Kennedy’s elec­tion and which wouldn’t have hap­pened oth­er­wise, how about “Mon­roe Cake” in­stead, which isn’t a pot­shot at any­one? And yes, I be­lieve there was a moon land­ing, in the ex­act way the tale was offi­cially told, un­til and un­less I’m pre­sented with suffi­cient ev­i­dence of the con­trary, which hasn’t hap­pened yet and which I don’t an­ti­ci­pate hap­pen­ing. I just don’t en­dorse an­tag­o­niz­ing peo­ple, or oth­er­wise rais­ing ten­sions, un­less you have to.

a nice neigh­bor­hood-structure

?

do not in fact ac­tu­ally ex­ist.

I thought many-wor­lds im­plied they did ex­ist “some­where”?

And the same law could just as easily have said that you’re likely to find yourself in a world that goes over the integral of modulus to the power 1.99999?

Consider using parentheses instead of a comma; I had to backtrack at the second semicolon, having thought that it was the second kind of stuff (and then remembering that, had the list items been separated by commas, you’d have used a colon and not a semicolon).

Why don’t you use bul­let points and num­bered lists more of­ten? They’d make read­ing less fluid, but they’d also make some of your para­graphs much clearer, I think.

mag­i­cal-re­al­ity-fluid

A bit of a dis­tract­ing con­cept. How about the Pratch­ett for­mu­la­tion in­stead: thing­ness? It’s et­y­molog­i­cally cor­rect, and quite evoca­tive.

This is just the same sort of problem if you say that causal models are meaningful and true relative to a mixture of three kinds of stuff: actual worlds, logical validities, and counterfactuals

I always thought ‘qualia’ was singular… Still, a link to Wikipedia would not be unwelcome; I’m having trouble parsing the sentence. “build references”? You seem to imply that they’re wrong for doing so, yet don’t seem to make explicit why.

The whole para­graph on the An­thropic Trilemma has left me con­fuz­zled. Then I clicked the link, saw the lengthy ar­ti­cle, and thought “not to­day”. Maybe it would be benefi­cial to put a header/​ab­stract/​sum­mary on top of your old se­quences ar­ti­cles, for those of us who want to re­vise the old stuff but don’t want to have to read the whole thing all over again.

And -alas- the para­graph on mod­ern philos­o­phy ul­ti­mately leaves me with noth­ing other than “EY thinks mod­ern philos­o­phy is do­ing stuff that seems ob­vi­ously stupid or half-baked”. Not the sort of thing you should do lightly; a link to some­thing more de­vel­oped would be good.

This is con­fus­ing the pro­ject of get­ting the gnomes out of the haunted mine, with try­ing to un­make the rain­bow.

When read­ing your work, I of­ten share the feel­ing that Ri­talin just de­scribed. In this par­tic­u­lar in­stance, I was with you up un­til you started talk­ing about the Born prob­a­bil­ities and then I just felt to­tally lost.

Yes, I knew about them. I try to shorten them in everything I do, from my vocabulary register to the concepts I use, which I try to make as rent-paying and empirical as possible. It’s heavier work than I foresaw.

This has moved me from “im­pos­si­ble-to-un­der­stand nerd who talks down to you from an im­pen­e­tra­ble ivory tower” to “that creepy guy who talks in punches and has strange ideas that make sense”. Or, if you will, from a Shel­don Cooper to a cool­ness-im­paired Tyler Dur­den. So­cially, it wasn’t a big gain.

The great thing about talk­ing with some­one in per­son (or at least, in real-time one-to-one con­ver­sa­tions) is that you can first as­sess how large the in­fer­en­tial dis­tance is, e.g. “What are you work­ing on?” “Cos­mic rays. Do you know what cos­mic rays are?” “No.” “Do you know what sub­atomic par­ti­cles are?” “No.” “Do you know what an atom is?” “Yes.”

You just have to hope they won’t Wheatley their way around your questions and try to feign understanding things they don’t, treating knowledge like a status game. That can really put a damper on meaningful communication.

I don’t think that ever hap­pened to me—at worst, they in­cor­rectly be­lieved that the un­der­stand­ing they had got from pop­u­lariza­tions was ac­cu­rate. But pretty much ev­ery­body at some point ad­mits “I wish I could un­der­stand ev­ery­thing of that, but that sounds cool”, ex­cept peo­ple who ac­tu­ally un­der­stand (as ev­i­denced by the fact that they ask ques­tions too rele­vant for them to be just par­rot­ing stuff to hide ig­no­rance).

(I guess the kind of peo­ple who treat ev­ery­thing like a sta­tus game would con­sider knowl­edge about sci­ency top­ics to be nerdy and there­fore un­cool.)

One way to treat knowl­edge like a sta­tus game is to be a “sci­ence fan.” This is a game you play with other “sci­ence fans,” and you win by know­ing more “mind-blow­ing facts” about sci­ence than other peo­ple. It is pop­u­lar on Quora.

Ah, yes, the math­e­mat­i­cian’s dou­ble take. One should be wary of those, es­pe­cially at a high level; when an el­der math­e­mat­i­cian wants to skip in­fer­en­tial steps for the sake of ex­pe­di­ency, there’s a chance that “then a mir­a­cle oc­curs” is some­where in that mess of a black­board.

In fact, the whole point of having a younger chevruta is so that they can point out the kind of details the bigger, more inferentially-distant minds might accidentally gloss over. They’re like the great writer’s spell-checker. Or like the comment section for Yudkowsky’s blog posts.

Jok­ing aside, I was ac­tu­ally won­der­ing if oth­ers here felt the same way as I about EY’s lat­est se­quence of posts.

Yeah, with “atoms of three­ness” Eliezer seems to have nar­rowly missed an in­ter­est­ing point. Mul­ti­ply­ing ap­ples to get square ap­ples makes no sense, but if we’d di­vided them in­stead, we’d no­tice that the uni­verse con­tains di­men­sion­less con­stants—if the uni­verse can be said to “con­tain” any­thing at all, like atoms or ve­loc­ity.

In­ci­den­tally, I’d give a prob­a­bil­ity of about 0.1 to the state­ment “If Lee Har­vey Oswald hadn’t shot John F. Kennedy, some­one else would have”—there have been many peo­ple who have tried to as­sas­si­nate Pres­i­dents.

I guess this is my main is­sue with the whole se­quence. No way to set­tle a wa­ger means in my mind that there is no way to as­cer­tain the truth of a state­ment, no mat­ter how much physics, math and logic you throw at it.

EDIT: Try­ing to steel-man the game of coun­ter­fac­tu­als: One way to set­tle the wa­ger would be to run a simu­la­tion of the world as is, watch the as­sas­si­na­tion hap­pen in ev­ery run, then do a tiny change which leads to no mea­surable large-scale effects (no-but­terflies con­di­tion), ex­cept “Lee Har­vey Oswald hadn’t shot John F. Kennedy”.

But what does “Lee Har­vey Oswald hadn’t shot John F. Kennedy” mean, ex­actly? He missed? Kennedy took a differ­ent route? Oswald grew up to be an up­stand­ing cit­i­zen?

One can imag­ine a whole spec­trum of pos­si­ble coun­ter­fac­tual Kennedy-lives (KL) wor­lds, some of which are very similar to ours up to the day of the shoot­ing, and oth­ers not so much. What prop­er­ties of this spec­trum would con­sti­tute a win­ning wa­ger? Would you go for “ev­ery KL world has to be oth­er­wise in­dis­t­in­guish­able (by what crite­ria? Me­dia head­lines?) from ours”? Or “there is at least one KL world like that”? Or some­thing in be­tween? Or some­thing to­tally differ­ent?

Until one drills down and settles the definition of a counterfactual, probably in a way similar to the above, I see no way to meaningfully discuss the issue.

That’s the point of this post. Only causal mod­els can be set­tled. Coun­ter­fac­tu­als can­not be ob­served, and can only be de­rived as log­i­cal con­structs via ax­io­matic speci­fi­ca­tion from the causal mod­els which can be ob­served.

As faul_sname said be­low, one way to set­tle the wa­ger—and I mean an ac­tual wa­ger in our cur­rent world, where we don’t have ac­cess to Or­a­cle AIs—would be to ag­gre­gate his­tor­i­cal data about pres­i­den­tial as­sas­si­na­tions in gen­eral, and as­sas­si­na­tion at­tempts on Kennedys in par­tic­u­lar, and build a model out of them.

We could then say, “Ok, there’s an 82% chance that, in the absence of Oswald, someone would’ve tried to assassinate Kennedy, and there’s a 63% chance that this attempt would’ve succeeded, so there’s about a 52% chance that someone would’ve killed Kennedy after all, and thus you owe me about half of the prize money”.
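For what it’s worth, the arithmetic of that settlement is just the chain rule applied to the comment’s two made-up estimates (treating the attempt and its success as two independent stages). A minimal sketch in Python, using the hypothetical 82% and 63% figures from the comment, which are illustrative rather than real historical data:

```python
# Toy settlement of the wager, using the comment's hypothetical figures.
p_attempt = 0.82   # P(someone else attempts the assassination | no Oswald)
p_success = 0.63   # P(the attempt succeeds | an attempt is made)

# Chain rule: P(killed anyway) = P(attempt) * P(success | attempt)
p_killed_anyway = p_attempt * p_success

print(f"P(Kennedy killed anyway) = {p_killed_anyway:.0%}")  # -> 52%
```

On those toy numbers the bettor would indeed collect roughly half the prize money, as the comment suggests.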

...which would be set­tling a wa­ger about the causal model that you built. The closer your causal model comes to ac­cu­rately re­flect­ing the “coun­ter­fac­tual world” that it is sup­posed to re­fer or cor­re­spond to, the more it ac­tu­ally in­stan­ti­ates that world. (Ex­cept that by perform­ing coun­ter­fac­tual surgery, you have in­serted your­self into the causal mini-uni­verse that you’ve built.) The “coun­ter­fac­tual” stops be­ing counter, and starts be­ing fac­tual.

A coun­ter­fac­tual world doesn’t ex­ist (I think?), whereas your model does. If your model is a full-blown Planck-scale-de­tailed simu­la­tion of a uni­verse, then it is a phys­i­cal thing which fits very well your log­i­cal de­scrip­tion of a coun­ter­fac­tual world. E.g., if you make a perfect simu­la­tion of a uni­verse with the same laws of physics as ours, but where you sur­gi­cally al­ter it so that Oswald misses, then you have built an “ac­cu­rate” model of that coun­ter­fac­tual—that is, one of the many mod­els that satisfy the (quasi-)log­i­cal de­scrip­tion, “Every­thing is the same ex­cept Oswald didn’t kill Kennedy”.

A model is closer to the coun­ter­fac­tual when the model bet­ter satis­fies the con­di­tions of the coun­ter­fac­tual. A statis­ti­cal model of the sort we use to­day can be very effec­tive in limited do­mains, but it is a mil­lion miles away from ac­tu­ally satis­fy­ing con­di­tions of a coun­ter­fac­tual uni­verse. For ex­am­ple, con­sider Eliezer’s di­a­gram for the “Oswald didn’t kill Kennedy” model. It uses the im­pres­sive, mod­ern math of con­di­tional prob­a­bil­ity—but it has five nodes. I would ven­ture to guess that our uni­verse has more than five nodes, so the model does not fit the de­scrip­tion “a great big causal uni­verse in all its glory, but where Oswald didn’t kill Kennedy”.

More re­al­is­ti­cally:

We col­lect some med­i­cal data from the per­son [who wants to buy can­cer in­surance from us], feed it into our statis­ti­cal model (which has been trained on a large num­ber of past cases), and it tells us, “there’s a 52% chance this per­son will de­velop can­cer in the next 20 years”. Now we can quote him a rea­son­able price.

Our model might have mil­lions of “neu­rons” in a net, or mil­lions of nodes in a PGM, or mil­lions of fea­ture pa­ram­e­ters for re­gres­sion… but that is nowhere near the com­plex­ity con­tained in .1% of one mil­lionth of the pinky toe of the per­son we are sup­pos­edly mod­el­ling. It works out nicely for us be­cause we only want to ask our model a few high-level ques­tions, and be­cause we snuck in a whole bunch of com­pu­ta­tion, e.g. when we used our vi­sual cor­tex to read the in­stru­ment that mea­sures the pa­tient’s blood pres­sure. But our model is not ac­cu­rate in an ab­solute sense.

This last ex­am­ple is a model of an­other phys­i­cal sys­tem. The Oswald ex­am­ple is sup­posed to model a coun­ter­fac­tual. Or ac­tu­ally, to put it bet­ter: a model doesn’t de­scribe a coun­ter­fac­tual, a coun­ter­fac­tual de­scribes a model.

Let’s say that, in­stead of can­cer in­surance, our imag­i­nary in­surance com­pany was sel­l­ing as­sas­si­na­tion in­surance. A poli­ti­cian would come to us; we’d feed what we know about him into our model; and we’d quote him a price based on the prob­a­bil­ity that he’d be as­sas­si­nated.

Are you saying that such a feat cannot realistically be accomplished? If so, what’s the difference between this and cancer insurance? After all, “how likely is this guy to get killed” is also a “high-level question”, just as “how likely is this guy to get cancer”, isn’t it?

Some­one could re­al­is­ti­cally pre­dict whether or not you will be as­sas­si­nated, with high con­fi­dence, us­ing (per­haps much larger) ver­sions of mod­ern statis­ti­cal com­pu­ta­tions.

To do so, they would not need to con­struct any­thing so elab­o­rate as a com­pu­ta­tion that con­sti­tutes a chunk of a full blown causal uni­verse. They could ig­nore quarks and such, and still be pretty ac­cu­rate.

Such a model would not re­fer to a real thing, called a “coun­ter­fac­tual world”, which is a causal uni­verse like ours but with some changes. Such a thing doesn’t ex­ist any­where.

...un­less we make it ex­ist by perform­ing a com­pu­ta­tion with all the causal­ity-struc­ture of our uni­verse, but which has tweaks ac­cord­ing to what we are test­ing. This is what I meant by a more ac­cu­rate model.

All right, that was much clearer, thanks! But then, why do we care about a “counterfactual world” at all?

My impression was that Eliezer claimed that we need a counterfactual world in order to evaluate counterfactuals. But I argue that this is not true; for example, we could ask our model “what are my chances of getting cancer?” just as easily as “what are my chances of getting cancer if I stop smoking right now?”, and get useful answers back, without constructing any alternate realities. So why do we need to worry about a fully-realized counterfactual universe?

Ex­actly. We don’t. There are only real mod­els, and log­i­cal de­scrip­tions of mod­els. Some of those de­scrip­tions are of the form “our uni­verse, but with tweak X”, which are “coun­ter­fac­tu­als”. The prob­lem is that when our brains do coun­ter­fac­tual mod­el­ing, it feels very similar to when we are just do­ing ac­tual-world mod­el­ing. Hence the sen­sa­tion that there is some ac­tual world which is like the coun­ter­fac­tual-type model we are us­ing.

My im­pres­sion was that Eliezer went much farther than that, and claimed that in or­der to do coun­ter­fac­tual mod­el­ing at all, we’d have to cre­ate an en­tire coun­ter­fac­tual world, or else our mod­els won’t make sense. This is differ­ent from say­ing, “our brains don’t work right, so we’ve got to watch out for that”.

The closer your causal model comes to ac­cu­rately re­flect­ing the “coun­ter­fac­tual world” that it is sup­posed to re­fer or cor­re­spond to...

I’m not sure I understand this statement. Forget Oswald for a moment, and let’s imagine we’re working at an insurance company. A person comes to us, and says, “sell me some cancer insurance”. This person currently does not have cancer, but there’s a chance that he could develop cancer in the future (let’s pretend there’s only one type of cancer in the world, just for simplicity). We collect some medical data from the person, feed it into our statistical model (which has been trained on a large number of past cases), and it tells us, “there’s a 52% chance this person will develop cancer in the next 20 years”. Now we can quote him a reasonable price.

How is this situation different from the “killing Kennedy” scenario? We are still talking about a counterfactual, since Kennedy is alive and our applicant is cancer-free.

You don’t have to construct the model at that level of detail to meaningfully discuss the issue. Just look at the base rate of presidential assassinations and update that to cover the large differences with the Kennedy case. If you’re trying to simulate a universe without Lee Harvey Oswald, you’re probably overfitting, particularly if you’re a human. Your internal model of how Kennedy was actually shot doesn’t contain a high-fidelity picture of the world in which Oswald grew up and went through a series of mental states that culminated with him shooting Kennedy (or at least, you’re not simulating each mental state to come to the outcome). Instead, you have a model of the world in which Lee Harvey Oswald shoots JFK, and otherwise doesn’t really factor into your model. While removing Oswald from the real world would have large effects, removing him from your model doesn’t.

I think that when you ask “what are the chances that Kennedy would have been shot if Oswald hadn’t done it?” you’re probably asking something along the lines of “If I build the best model I can of the world surrounding that event, and remove Oswald, does the model show Kennedy getting shot, and if so, with what confidence?” So in order to settle the wager, you would have to construct a model of the world that both of you agreed made good enough predictions (probably by giving it information about the state of society at various times and seeing how often it predicts a presidential assassination) and see what answer it spits out. There might be a problem of insufficient data, but it seems pretty clear to me that when we talk about counterfactuals, we’re talking about models of the world that we alter, not actual, existing worlds. If many worlds was false and there was only one, fully deterministic universe (that contained humans), we would still talk about counterfactuals. Unless I’m missing something obvious.
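The “build a model both parties accept, then remove Oswald” procedure can be sketched as counterfactual surgery on a tiny generative model. Everything below is hypothetical: the node names and every probability are invented for illustration, not drawn from historical data.

```python
import random

def simulate(oswald_present, rng):
    """One sample from a toy causal model of the event.

    All probabilities are made up for illustration only.
    """
    other_motivated_shooter = rng.random() < 0.10       # background factor
    oswald_shoots = oswald_present and rng.random() < 0.95
    other_shoots = other_motivated_shooter and rng.random() < 0.50
    return oswald_shoots or other_shoots                # Kennedy is shot

def p_shot(oswald_present, trials=100_000, seed=0):
    # Monte Carlo estimate of P(shot) under the given setting of the node.
    rng = random.Random(seed)
    return sum(simulate(oswald_present, rng) for _ in range(trials)) / trials

# "Counterfactual surgery": hold the rest of the model fixed and force
# the single node oswald_present to False.
p_factual = p_shot(oswald_present=True)
p_counterfactual = p_shot(oswald_present=False)
```

Whether the wager is “settled” then reduces to whether both parties accept this model as a good enough stand-in for the counterfactual description, which is the point above: the counterfactual is a property of an agreed-upon model that we alter, not of an existing world.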

Your internal model of how Kennedy was actually shot doesn’t contain a high-fidelity picture of the world in which Oswald grew up and went through a series of mental states that culminated with him shooting Kennedy

Well, my model has Oswald in the Marines with Kerry Thorn­ley — aka Lord Omar, of Dis­cor­dian leg­end — and a coun­ter­fac­tual in which a slightly more tripped-out con­ver­sa­tion be­tween the two would have led to Oswald be­com­ing an an­ar­chist in­stead of a Marx­ist; thus pre­vent­ing his defec­tion to the Soviet Union ….

(I have not yet en­coun­tered a claim to have finished Re­duc­ing an­throp­ics which (a) ends up with only two kinds of stuff and (b) does not seem to im­ply that I should ex­pect my ex­pe­riences to dis­solve into Boltz­mann-brain chaos in the next in­stant, given that if all this talk of ‘de­gree of re­al­ness’ is non­sense, there is no way to say that phys­i­cally-lawful copies of me are more com­mon than Boltz­mann brain copies of me.)

I think it was Vladimir Nesov who said some­thing like the fol­low­ing: An­ti­ci­pa­tion is just what it feels like when your brain has de­cided that it makes sense to pre-com­pute now what it will do if it has some par­tic­u­lar pos­si­ble fu­ture ex­pe­rience. You should ex­pect ex­pe­riences only if ex­pect­ing (i.e., think­ing about in ad­vance) those ex­pe­riences has greater ex­pected value than think­ing about other things.

On this view, which seems right to me, you shouldn’t ex­pect to dis­solve into Boltz­mann-brain chaos. This is be­cause you know that any la­bor that you ex­pend on that ex­pec­ta­tion will be to­tally wasted. If you find your­self start­ing to dis­solve, you won’t look back on your pre­sent self and think, “If only I’d thought in ad­vance about what to do in this situ­a­tion. I could have been pre­pared. I could be do­ing some­thing right now to im­prove my lot.”

Consider an analogous situation. You’re strapped to a bed in a metal box, utterly immobilized and living a miserable life. Intravenous tubes are keeping you alive. You know that you are powerless to escape. In fact, you know that you are absolutely powerless to make your life in here any better or worse. You know that, tomorrow, your captors will roll a million-sided die, with sides numbered one to a million. If the die comes up “1”, you will be released, free to make the best of your life in the wide-open world. Otherwise, if any other side of the die comes up, you will remain confined as you are now until you die. There will be no other chances for any change in your circumstances.

Clearly you are more likely to spend the rest of your life in the box. But should you spend any time anticipating that? Of course not. What would be the point? You should spend all of your mental effort on figuring out the best thing to do if you are released. Your expected utility is maximized by thinking only about this scenario, even though it is very improbable. Even a single thought given to the alternative possibility is a wasted thought. You should not anticipate confinement after tomorrow. You should not expect to be confined after tomorrow. These mental activities are maximally bad options for what you could be doing with your time right now.
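The decision-theoretic structure of the box-and-die story can be made explicit with a toy expected-value computation. The utility figure is invented; only the comparison between the two options matters:

```python
# Toy expected-utility comparison for the box-and-die story.
p_release = 1 / 1_000_000
p_confined = 1 - p_release

# By hypothesis, nothing you plan can change the confined outcome, so
# planning for confinement is worth 0 utils in every case. Planning for
# release is assumed (arbitrarily) to be worth 100 utils, but only if
# you are in fact released.
ev_plan_for_release = p_release * 100 + p_confined * 0
ev_plan_for_confinement = p_release * 0 + p_confined * 0

# Planning for the improbable release dominates, despite its tiny probability.
assert ev_plan_for_release > ev_plan_for_confinement
```

This is the sense in which the comment argues one “should not expect” confinement: the expecting being discouraged is a mental act with a cost-benefit structure, not a probability assignment.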

You’ve just re­defined “ex­pect” so that the prob­lem goes away. For sure, there’s no point prac­ti­cally wor­ry­ing about out­comes that you can’t do any­thing about, but that doesn’t mean you shouldn’t ex­pect them. If you want to ar­gue that we should use differ­ent no­tion than “ex­pect”, or that the prac­ti­cal con­sid­er­a­tions show that the Boltz­mann-brain ar­gu­ment isn’t a prob­lem, that’s fine, but this has all the benefits of theft over hon­est toil.

I don’t be­lieve that there is any re­defi­ni­tion go­ing on here. I in­tend to use “ex­pect” in ex­actly the usual sense, which I take also to be the sense that Eliezer was us­ing when he wrote “I have not yet en­coun­tered a claim to have finished Re­duc­ing an­throp­ics which … does not seem to im­ply that I should ex­pect my ex­pe­riences to dis­solve into Boltz­mann-brain chaos in the next in­stant”.

Both he and I are refer­ring to a par­tic­u­lar men­tal ac­tivity, namely the ac­tivity that is nor­mally called “ex­pect­ing”. With re­gard to this very same ac­tivity, I am ad­dress­ing the ques­tion of whether one “should ex­pect [one’s] ex­pe­riences to dis­solve into Boltz­mann-brain chaos in the next in­stant”. (Em­pha­sis added.)

The po­ten­tially con­tro­ver­sial claim in my ar­gu­ment is not the defi­ni­tion of “ex­pect”. That defi­ni­tion is sup­posed to be ut­terly stan­dard. The con­tro­ver­sial claim is about when one ought to ex­pect. The “stan­dard view” is that one ought to ex­pect an event just when that event has a prob­a­bil­ity of hap­pen­ing that is greater than some thresh­old. To ar­gue against this view, I am point­ing to the fact that ex­pect­ing an event is a cer­tain men­tal act. Since it is an act, a proper jus­tifi­ca­tion for do­ing it should take into ac­count util­ities as well as prob­a­bil­ities. My claim is that, once one takes the rele­vant util­ities into ac­count, one eas­ily sees that one shouldn’t ex­pect one­self to dis­solve into Boltz­mann-brain chaos, even if that dis­solu­tion is over­whelm­ingly likely to hap­pen.

Ah, okay. You’re quite right then, I mis­di­ag­nosed what you were try­ing to do. I still think it’s wrong, though.

In par­tic­u­lar, I don’t think the “should” in that sen­tence works the way you’re claiming that it does. In con­text, “Should I ex­pect X?” seems equiv­a­lent to “Would I be cor­rect in ex­pect­ing X?” or some­such, rather than “Ought I (prac­ti­cally/​morally) to ex­pect X?”. English is not so well-be­haved as that. I guess it kind of looks like per­haps it’s an epistemic-ra­tio­nal­ity “should”, but I’m not sure it’s even that.

“Should I ex­pect X?” seems equiv­a­lent to “Would I be cor­rect in ex­pect­ing X?” or some­such...

Then my an­swer would be, Maybe you would be cor­rect. But why would this im­ply that an­throp­ics needs any ad­di­tional “re­duc­ing”, or that some­thing more than logic + physics is needed? It all still adds up to nor­mal­ity. You still make all the same de­ci­sions about what you should work to pro­tect or pre­vent, what you should think about and try to bring about, etc. All the same things need to be done with ex­actly the same ur­gency. Your allegedly im­pend­ing dis­solu­tion doesn’t change any of this.

Right. So, as I said, you are coun­sel­ling that “an­throp­ics” is prac­ti­cally not a prob­lem, as even if there is a sense of “ex­pect” in which it would be cor­rect to ex­pect the Boltz­mann-brain sce­nario, this is not worth wor­ry­ing about be­cause it will not af­fect our de­ci­sions.

That’s a perfectly rea­son­able thing to say, but it’s not ac­tu­ally ad­dress­ing the ques­tion of get­ting an­throp­ics right, and it’s mis­lead­ing to pre­sent it as such. You’re just say­ing that we shouldn’t care about this par­tic­u­lar bit of an­throp­ics. Doesn’t mean that I wouldn’t be cor­rect (or not) to ex­pect my im­pend­ing dis­solu­tion.

it’s not ac­tu­ally ad­dress­ing the ques­tion of get­ting an­throp­ics right, and it’s mis­lead­ing to pre­sent it as such.

I would have been “ad­dress­ing the ques­tion of get­ting an­throp­ics right” if I had talked about what the “I” in “I will dis­solve” means, or about how I should go about as­sign­ing a prob­a­bil­ity to that in­dex­i­cal-laden propo­si­tion. I don’t think that I pre­sented my­self as do­ing that.

I’m also not say­ing that I’ve solved these prob­lems, or that we shouldn’t work to­wards a gen­eral the­ory of an­throp­ics that an­swers them.

The use­less­ness of an­ti­ci­pat­ing that you will be a Boltz­mann brain is par­tic­u­lar to Boltz­mann-brain sce­nar­ios. It is not a fea­ture of an­thropic prob­lems in gen­eral. The Boltz­mann brain is, by hy­poth­e­sis, pow­er­less to do any­thing to change its cir­cum­stances. That is what makes an­ti­ci­pat­ing the sce­nario pointless. Most an­thropic sce­nar­ios aren’t like this, and so it is much more rea­son­able to won­der how you should al­lo­cate “an­ti­ci­pa­tion” to them.

The ques­tion of whether in­dex­i­cals like “I” should play a role in how we al­lo­cate our an­ti­ci­pa­tion — that ques­tion is open as far as I know.

My point was this. Eliezer seemed to be say­ing some­thing like, “If a the­ory of an­throp­ics re­duces an­throp­ics to physics+logic, then great. But if the the­ory does that at the cost of say­ing that I am prob­a­bly a Boltz­mann brain, then I con­sider that to be too high a price to pay. You’re go­ing to have to work harder than that to con­vince me that I’m re­ally and truly prob­a­bly a Boltz­mann brain.” I am say­ing that, even if a the­ory of an­throp­ics says that “I am prob­a­bly a Boltz­mann brain” (where the the­ory ex­plains what that “I” means), that is not a prob­lem for the the­ory. If the the­ory is oth­er­wise un­prob­le­matic, then I see no prob­lem at all.

It sounds like solv­ing a differ­ent prob­lem. Like I said, it’s fine to claim that we should use a differ­ent no­tion than the one that we do, but chang­ing it by fiat and then claiming there’s no prob­lem is not do­ing that.

I realize this is a small thing, but this essay appears to use “fact” to mean “a statement sufficiently well-formed to be either true or false” rather than “a statement which is true”, and that kept distracting me from its actual point. Can some other word be found?

He is saying that that is a fact, but not merely because it is “a statement sufficiently well-formed to be either true or false”. For example, he would say that “If Oswald hadn’t shot Kennedy, somebody else would’ve” is not a fact, even though it is equally well formed. The point of the article is to explain how some counterfactuals can be facts while others are not.

I like how you frame this discussion. At this stage, I’d like to see more LessWrongers spending sleepless nights pondering how we want to renegotiate our correspondence theory to keep our theory and jargon as clean and useful as possible. Calling ordinary assertions ‘true/false’ and logical ones ‘valid/invalid’ isn’t satisfactory. Not only does it problematize mixed-reference cases, but it also confusingly conflates a property of structured groups of assertions (arguments, proofs, etc.) with a property of individual assertions.

Our prototype for ‘truth’ is that an assertive representation co-occur in reality with the represented circumstance. Problem discourses like ethics, alethic modality, and pure mathematics seem to deviate from the correspondence-theory prototype because our confidence in their truth or falsehood outstrips our confidence in anything corresponding to their semantic content. (For those with very sparse metaphysical views, sometimes called ‘noneists,’ the outstripping is especially severe.) Put simply, although our ‘logical’ statements seem to depend on the world — at a minimum, on our linguistic choices, our derivation rules, etc. — they don’t seem to depend on there being a literal worldly correlate for what they assert. Truth-conditions and representational content come apart radically.

Perhaps we should rescue the correspondence theory by denying that the correspondence is simply a matter of the asserted circumstance obtaining? “The average Australian male is 5′9″” is not true because there exists some object, the average Australian male, falling under the extension of the predicate (or bearing the property) “being 5′9″”. It must be analyzed into a more complex physical statement, or a case of mixed reference. If distinguishing purely physical from mixed statements is difficult in many cases, as singling out the purely logical statements seems to be, then this gives us more pragmatic reason to relax our constraints on truth-aptness and either abandon or broaden our correspondence theory as a general theory of truth.

It should go without saying that if we adopt this approach, we need not compromise our realism; how we use the word ‘truth’ is a linguistic matter, not a deep metaphysical one.

...although our ‘logical’ statements seem to depend on the world — at a minimum, on our linguistic choices, our derivation rules, etc. — they don’t seem to depend on there being a literal worldly correlate for what they assert. Truth-conditions and representational content come apart radically.

What’s missing from this part, to keep it from adequately addressing the question (combined with the earlier post on the nature of logic)?

To compare a mental image of high-level apple-objects to physical reality, for it to be true under a correspondence theory of truth, doesn’t require that apples be fundamental in physical law. A single discrete element of fundamental physics is not the only thing that a statement can ever be compared to. We just need truth conditions that categorize the low-level states of the universe, so that different low-level physical states are inside or outside the mental image of “some apples on the table” or alternatively “a kitten on the table”.

...And thus “The product of the apple numbers is six” is meaningful, constraining the possible worlds. It has a truth-condition, fulfilled by a mixture of physical reality and logical validity; and the correspondence is nailed down by a mixture of causal reference and axiomatic pinpointing.
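As a toy illustration of this mixed truth-condition (the representation and function names here are entirely my own invention, not anything from the original post), one can treat a “world” as a crude list of apple piles, and the sentence’s truth-condition as a predicate over such worlds: the “physical” side categorizes the low-level state into counts, and the “logical” side multiplies the counts and compares to six.

```python
# Toy sketch of a mixed physical+logical truth-condition.
# A "world" is modeled (very crudely) as a list of piles, each pile a list
# of apples -- a stand-in for whatever low-level physical detail you like.

def pile_counts(world):
    """'Physical' side: categorize the low-level state into high-level counts."""
    return [len(pile) for pile in world]

def product_is_six(world):
    """Mixed reference: physical categorization plus logical multiplication."""
    product = 1
    for n in pile_counts(world):
        product *= n
    return product == 6

two_and_three = [["apple"] * 2, ["apple"] * 3]
print(product_is_six(two_and_three))  # True: 2 * 3 == 6

# Physically removing one apple makes the very same sentence false,
# with no change at all on the logical side.
two_and_two = [["apple"] * 2, ["apple"] * 2]
print(product_is_six(two_and_two))  # False: 2 * 2 != 6
```

The point of the sketch is just that the predicate partitions low-level world states, rather than requiring a floating “six” anywhere inside any of them.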

I. ‘Valid’ is a bad word for what Eliezer’s talking about, because validity is a property of arguments, proofs, inferences, not of individual assertions. For now, I’ll call Eliezer’s validity ‘derivability’ or ‘provability.’

II. Strictly speaking, is logical derivability a kind of truth, or is it an alternative to truth that sometimes gets confused with it? Eliezer seems to alternate between these two views.

III. Are some statements simply ‘valid’ / ‘derivable’? Or is validity/derivability always relative to a set of inference rules (and, in some cases, axioms or assumptions)?

IV. If derivability is always relativized in this way, then what does it mean to say that “The product of the apple numbers is six” is true in virtue of a mixture of physical reality and logical derivability? A different set of logical or mathematical rules would have yielded a different result. ‘Logical pinpointing’ is meant to solve this — there is a unique imaginary, fictional, mathematical, etc. image that we’re reasoning with in every case, and ‘intuitionistic real numbers’ simply aren’t the same objects as ‘conventional real numbers,’ and there simply is no such thing as ‘the real numbers’ absent the aforementioned specifications. Should we say, then, that truth is bivalent, whereas derivability/validity is trivalent?

Here’s an example of where this sort of reasoning will lead us: First, there simply isn’t any such thing as a ‘continuum hypothesis’; we must exhaustively specify a set of inference rules and axioms/assumptions before we can even entertain a discrete logical claim, much less evaluate that claim’s derivability. Once we have fully pinpointed the expression, say as the ‘conventional continuum hypothesis’ or the ‘consistent Zermelo-Fraenkel continuum hypothesis,’ we then arrive at the conclusion that the hypothesis is not true (since it is logical and not empirical); nor is it false; nor is it valid/derivable; nor is its negation valid/derivable. It is thus ‘invalid’ in the weak sense that it can’t be derived, but is not ‘invalid’ in the strong sense of being disprovable. So, again, we have reason to speak of three properties (perhaps: provable, unprovable, disprovable), rather than of a bivalent ‘validity.’

V. Supposing correspondence-conditions fix truth-conditions, what fixes the correspondence-conditions? Relatedly, what makes assertions have the particular contents and referents they do, what supplies the ‘semantic glue’? And do logical or mixed-reference truths refer to anything in the world? If so, to what?

I especially like point/questions V. If we abandon the correspondence theory of truth, can we duck the questions? Because answering them seems like a lot of work, and like Dilbert and his office mates, I love the sweet smell of unnecessary work.

AFAIK, the proposition that “Logical and physical reference together comprise the meaning of any meaningful statement” is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven’t elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.

An important related idea I haven’t gone into here is the idea that the physical and logical references should be effective or formal, which has been in the job description since, if I recall correctly, the late nineteenth century or so, when mathematics was being axiomatized formally for the first time. This part is popular, possibly majoritarian; I think I’d call it mainstream. See e.g. http://plato.stanford.edu/entries/church-turing/ although logical specifiability is more general than computability (this is also already-known).

Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff is not well-enforced in mainstream philosophy.

AFAIK, the proposition that “Logical and physical reference together comprise the meaning of any meaningful statement” is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven’t elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.

If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.

David Hume, An Enquiry Concerning Human Understanding (1748)

As Mardonius says, 20th century logical empiricism (also called logical positivism or neopositivism) is basically the same idea with “abstract reasoning” fleshed out as “tautologies in formal systems” and “experimental reasoning” fleshed out initially as “statements about sensory experiences”. So the neopositivists’ original plan was to analyze everything, including physics, in terms of logic + sense data (similar to qualia, in modern terminology). But some of them, like Neurath, considered logic + physics a more suitable foundation from the beginning, and others, like Carnap, eventually became convinced of this as well, so the mature neopositivist position is quite similar to yours.

One key difference is that for you (I think; correct me if I am wrong) reductionism is an ontological enterprise, showing that the only “stuff” there is (in some vague sense) is logic and physics. For the neopositivists, such a statement would be as meaningless as the metaphysics they were trying to “commit to the flames”. Reductionism was a linguistic enterprise: to develop a language in which every meaningful statement is translatable into sentences about physics (or qualia) and logic, in order to make the sciences more unified and coherent and to do away with muddled metaphysical thought.

There is no article on Carnap in the SEP, and I couldn’t find a clear statement in the Vienna Circle article, but there is a fairly good one in the Neurath article:

In his classic work Der Logische Aufbau der Welt (1928) (known as the Aufbau and translated as The Logical Structure of the World), Carnap investigated the logical ‘construction’ of objects of inter-subjective knowledge out of the simplest starting point or basic types of fundamental entities (Russell had urged in his late solution to the problem of the external world to substitute logical constructions for inferred entities). He introduced several possible domains of objects, one of which being the psychological objects of private sense experience—analysed as ‘elementary experiences’.

(…)

Neurath first confronted Carnap on yet another alleged feature of his system, namely, subjectivism. He promptly rejected Carnap’s proposals on the grounds that if the language and the system of statements that constitute scientific knowledge are intersubjective, then phenomenalist talk of immediate subjective, private experiences should have no place.

(…)

Following Neurath, Carnap explicitly opposed to the language of experience a narrower conception of intersubjective physicalist language which was to be found in the exact quantitative determination of physics-language realized in the readings of measurement instruments. Remember that for Carnap only the structural or formal features, in this case, of exact mathematical relations (manifested in the topological and metric characteristics of scales), can guarantee objectivity. After the Aufbau, now the unity of science rested on the universal possibility of the translation of any scientific statement into physical language—which in the long run might lead to the reduction of all scientific knowledge to the laws and concepts of physics.

The mature Carnap position seems to be, then, not to reduce everything to logic + fundamental physics (electrons/wavefunctions/etc.), as perhaps you thought I had implied, but to reduce everything to logic + observational physics (statements like “Voltmeter reading = 10 volts”). Theoretical sentences about electrons and such are to be reduced (in some sense that varied with different formulations) to sentences of observational physics. This does not mean that for Carnap electrons are not “real”; as I said before, reductionism was conceived as a linguistic proposal, not an ontological thesis.

Cucumbers are both experiences and models, actually. You experience a cucumber’s sight, texture, and taste; you model this as a green vegetable with certain properties which predict and constrain your similar future experiences.

Numbers, by comparison, are pure models. That’s why people are often confused about whether they “exist” or not.

Are experiences themselves models? If not, are you endorsing the view that qualia are fundamental?

Experiences are, of course, themselves a multi-layer combination of models and inputs, and at some point you have to stop, but qualia seem to be at too high a level, given that they appear to be reducible to physiology in most brain models.

I’ve played with the idea that there is nothing but experience (Zen and the Art of Motorcycle Maintenance was rather convincing). However, it then becomes surprising that my experience generally behaves as though I’m living in a stable universe with such things as previously unexperienced cucumbers showing up at plausible times.

I think there are three broadly principled and internally consistent epistemological stances: radical skepticism, solipsism, and realism. Radical skepticism is principled because it simply demands extremely high standards before it will assent to any proposition; solipsism is principled because it combines skepticism with the Cartesian insight that I can be certain of my own experiences; and realism is principled because it tries to argue to the best explanation for phenomena in general, appealing to unexperienced posits that could plausibly generate the data at hand.

I do not tend to think so highly of idealistic and phenomenalistic views that fall somewhere in between solipsism and realism; these I think are not as pristine and principled as the above three views, and their uneven application of skepticism (e.g., doubting that mind-independent cucumbers exist but refusing to doubt that Platonic numbers or Other Minds exist) weakens their case considerably.

How do you know that unexperienced, unmodeled cucumbers don’t exist?

This question is meaningless in the framework I have described (Experience + models = reality). If you provide an argument why this framework is not suitable, i.e., it fails to be useful in a certain situation, feel free to give an example.

This question is meaningless in the framework I have described (Experience + models = reality).

If commitment to your view renders meaningless any discussion of whether your view is correct, then that counts against your view. We need to evaluate the truth of “Experience + models = reality” itself, if you think the statement in question is true. (And if it isn’t true, then what is it?)

If you provide an argument why this framework is not suitable, i.e., it fails to be useful in a certain situation, feel free to give an example.

Your language just sounds like an impoverished version of my language. I can talk about models of cucumbers, and experiences of cucumbers; but I can also speak of cucumbers themselves, which are the spatiotemporally extended referent of ‘cucumber,’ the object modeled by cucumber models, and the object represented by my experiential cucumbers. Experiences occur in brains; models are in brains, or in an abstract Platonic realm; but cucumbers are not, as a rule, in brains. They’re in gardens, refrigerators, grocery stores, etc.; and gardens and refrigerators and grocery stores are certainly not in brains either, since they are too big to fit in a brain.

Another way to motivate my concern: It is possible that we’re all mistaken about the existence of cucumbers; perhaps we’ve all been brainwashed to think they exist, for instance. But to say that we’re mistaken about the existence of cucumbers is not, in itself, to say that we’re mistaken about the existence of any particular experience or model; rather, it’s to say that we’re mistaken about the existence of a certain physical object, a thing in the world outside our skulls. Your view either does not allow us to be mistaken about cucumbers, or gives a completely implausible analysis of what ‘being mistaken about cucumbers’ means in ordinary language.

There may be a certain element of cross purposes here. I’m pretty sure Carnap was only seeking to reduce sentences to epistemic components, not reduce reality to ontological components. I’m not sure what Shminux is saying.

True. Accurate. Describing how the world is. Corresponding to an obtaining fact. My argument is:

Cucumbers are real.

Cucumbers are not models.

Cucumbers are not experiences.

Therefore some real things are neither models nor experiences. (Reality is not just models and experiences.)

You could have objected to any of my 3 premises, on the grounds that they are simply false and that you have good evidence to the contrary. But instead you’ve chosen to question what ‘correctness’ means and whether my seemingly quite straightforward argument is even meaningful. Not a very promising start.

That the same models achieve correlated/convergent such ratios across agents seems to be evidence that there is a unified something elsewhere that models can more accurately match, or less accurately match.

Note: I don’t understand all of this discussion, so I’m not quite sure just how relevant or adequate this particular definition/reduction is.

What is “obtaining fact” but analyzing (=modeling) an experience?

That a fact obtains requires no analysis, modeling, or experiencing. For instance, if no thinking beings existed to analyze anything, then it would be a fact that there is no thinking, no analyzing, no modeling, no experiencing. Since there would still be facts of this sort, absent any analyzing or modeling by any being, facts cannot be reduced to experiences or analyses of experience.

Yes, given that experiences+models=reality, cucumbers are a subset of reality.

You still aren’t responding to my argument. You’ve conceded premise 1, but you haven’t explained why you think premise 2 or 3 is even open to reasonable doubt, much less outright false.

∃x(cucumber(x))

∀x(cucumber(x) → ¬model(x))

∀x(cucumber(x) → ¬experience(x))

∴ ∃x(¬model(x) ∧ ¬experience(x))

This is a deductively valid argument (i.e., the truth of its premises renders its conclusion maximally probable). And it entails the falsehood of your assertion “Experience + models = reality” (i.e., it at a minimum entails the falsehood of ∀x(model(x) ∨ experience(x))). And all three of my premises are very plausible. So you need to give us some evidence for doubting at least one of my premises, or your view can be rejected right off the bat. (It doesn’t hurt that defending your view will also help us understand what you mean by it, and why you think it better than the alternatives.)
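For what it’s worth, the validity of the argument above can be checked mechanically. Here is a minimal Lean 4 rendering of it (the predicate names are just transliterations of the ones in the argument, and the whole thing is my own illustration, not anything from the original comment):

```lean
-- Abstract predicates over an arbitrary domain α.
variable {α : Type} (cucumber model experience : α → Prop)

-- The three premises entail the conclusion: some thing is
-- neither a model nor an experience.
example
    (h₁ : ∃ x, cucumber x)
    (h₂ : ∀ x, cucumber x → ¬ model x)
    (h₃ : ∀ x, cucumber x → ¬ experience x) :
    ∃ x, ¬ model x ∧ ¬ experience x :=
  h₁.elim fun c hc => ⟨c, h₂ c hc, h₃ c hc⟩
```

The proof is just existential elimination followed by applying each universal premise to the witness, which matches the informal “the cucumber itself is the counterexample” reading.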

Sure, all counterfactuals are models. But there is a distinction between counterfactuals that model experiences, counterfactuals that model models, and counterfactuals that model physical objects. Certainly not all models are models of models, just as not all words denote words, and not all thoughts are about thoughts.

When we build a model in which no experiences or models exist, we find that there are still facts. In other words, a world can have facts without having experiences or models; neither experiencelessness nor modellessness forces or entails the total absence of states of affairs. If x and y are not equivalent — i.e., if they are not true in all the same models — then x and y cannot mean the same thing. So your suggestion that “obtaining fact” is identical to “analyzing (=modeling) an experience” is provably false. Facts, circumstances, states of affairs, events — none of these can be reduced to claims about models and experiences, even though we must use models and experiences in order to probe the meanings of words like ‘fact,’ ‘circumstance,’ ‘state of affairs.’ (For the same reason, ‘fact’ is not about words, even though ‘fact’ is a word and we must use words to argue about what facts are.)

Not sure who that “we” is, but I’m certainly not a part of that group.

Are you saying that when you model what the Earth was like prior to the existence of the first sentient and reasoning beings, you find that your model is of oblivion, of a completely factless void in which there are no obtaining circumstances? You may need to get your reality-simulator repaired.

Anyway, judging by the downvotes, people seem to be getting tired of this debate, so I am disengaging.

I haven’t gotten any downvotes for this discussion. If you’ve been getting some, it’s much more likely because you’ve refused to give any positive arguments for your assertion “experience + models = reality” than because people are ‘tired of this debate.’ If you started giving us reasons to accept that statement, you might see that change.

Valid point. Any “gerrymandered” definitions should be made with the intent to clarify or simplify the solution to a problem, and I’d only evaluate them on their predictive usefulness, not on how you can use them to reject or enforce arguments in debates.

Even just take the old logical positivist doctrine about analyticity/syntheticity: all statements are either “analytic” (i.e., true by logic (near enough)) or synthetic (true due to experience). That’s at least on the same track. And I’m pretty sure they wouldn’t have had a problem with statements that were partially both.

Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff inside your philosophy is not mainstream.

I think I must be misunderstanding what you’re saying here, because something very similar to this is probably the principal accusation relied upon in metaphysical debates (if not the very top, certainly top 3). So let me outline what is standard in metaphysical discussions so that I can get clear on whether you’re meaning something different.

In metaphysics, people distinguish between quantitative and qualitative parsimony. Quantitative parsimony is about the amount of stuff your theory is committed to (so a theory according to which more planets exist is less quantitatively parsimonious than an alternative). Most metaphysicians don’t care about quantitative parsimony. On the other hand, qualitative parsimony is about the types of stuff that your theory is committed to. So if a theory is committed to causation and time, this would be less qualitatively parsimonious than one that was only committed to causation (just an example, not meant to be an actual case). Qualitative parsimony is seen to be one of the key features of a desirable metaphysical theory. The accusation that your theory postulates extra ontological stuff but doesn’t gain further explanatory power for doing so is basically the go-to standard accusation against a metaphysical theory.

Fundamentality is also a major philosophical issue—the idea that some stuff you postulate is ontologically fundamental and some isn’t. Fundamentality views are normally coupled with the view that what really matters is qualitative parsimony of fundamental stuff (rather than stuff generally).

So how does this differ from the claim that you’re saying is not mainstream?

The claim might just need correction to say, “Many philosophers say that simplicity is a good thing, but the requirement is not enforced very well by philosophy journals,” or something like that. I think I believe you, but do you have an example citation anyway? (SEP entries or other ungated papers are in general good; I’m looking for an example of an idea being criticized due to lack of metaphysical parsimony.) In particular, can we find e.g. anyone criticizing modal logic on the grounds that possibility shouldn’t be basic, for reasons of metaphysical parsimony?

In terms of Lewis, I don’t know of someone criticising him for this off-hand, but it’s worth noting that Lewis himself (in his book On the Plurality of Worlds) recognises the parsimony objection and feels the need to defend himself against it. In other words, even those who introduce unparsimonious theories in philosophy are expected to at least defend the fact that they do so (of course, many people may fail to meet these standards, but the expectation is there, and theories regularly get dismissed and ignored if they don’t give a good accounting of why we should accept their unparsimonious nature).

Quine’s paper On What There Is is basically an attack on views that hold that we need to accept the existence of things like Pegasus (because otherwise what are we talking about when we say “Pegasus doesn’t exist”?). Perhaps a ridiculous debate, but it’s worth noting that one of Quine’s main motivations is that this view is extremely unparsimonious.

From memory, some proponents of EDT support this theory because they think that we can achieve the same results as CDT (which they think is right) in a more parsimonious way by doing so (no link for that, however, as that’s just vague recollection).

I’m not actually a metaphysician, so I can’t give an entire roll call of examples, but I’d say that the parsimony objection is the most common one I hear when I talk to metaphysicians.

It still may be hard to resolve when something is as simple as possible.

So modal realism (the idea that possible worlds exist concretely) has been highlighted a few times in this thread as an unparsimonious theory, but Lewis has two responses to this:

1.) This is (at least mostly) quantitative unparsimony, not qualitative (lots of stuff, not lots of types of stuff). It’s unclear how bad quantitative unparsimony is. Specifically, Lewis argues that there is no difference between possible worlds and actual worlds (actuality is indexical), so he argues that he doesn’t postulate two types of stuff (actuality and possibility); he just postulates a lot more of the stuff that we’re already committed to. Of course, he may be committed to unicorns as well as goats (which the non-realist isn’t), but then you can ask whether he’s really committed to more fundamental stuff than we are.

2.) Lewis argues that his theory can explain things that no-one else can, so even if his theory is less parsimonious, it gives rewards in return for that cost.

Now many people will argue that Lewis is wrong, perhaps on both counts, but the point is that even with the case that’s been used almost as a benchmark for unparsimonious philosophy in this thread, it’s not as simple as “Lewis postulates two types of stuff when he doesn’t need to; therefore, clearly his theory is not as simple as possible.”

The Great Reductionist Project can be seen as figuring out how to express meaningful sentences in terms of a combination of physical references (statements whose truth-value is determined by a truth-condition directly corresponding to the real universe we’re embedded in) and logical references (valid implications of premises, or elements of models pinned down by axioms); where both physical references and logical references are to be described ‘effectively’ or ‘formally’, in computable or logical form. (I haven’t had time to go into this last part but it’s an already-popular idea in philosophy of computation.)

And the Great Reductionist Thesis can be seen as the proposition that everything meaningful can be expressed this way eventually.

Which, to my admittedly rusty knowledge of mid-20th-century philosophy, sounds extremely similar to the anti-metaphysics position of Carnap circa 1950. His work on Ramsey sentences, if I recall, was an attempt to reduce mixed statements including theoretical concepts (“appleness”) to a statement consisting purely of Logical and Observational Terms. I’m fairly sure I saw something very similar to your writings in his late work regarding Modal Logic, but I’m clearly going to have to dig up the specific passage.

Amusingly, this endeavor also sounds like your arch-nemesis David Chalmers’ new project, Constructing the World. Some of his moderate responses to various philosophical puzzles may actually be quite useful to you in dismissing sundry skeptical objections to the reductive project; from what I’ve seen, his dualism isn’t indispensable to the interesting parts of the work.

Just to say that in general, apart from the stuff about consciousness, which I disagree with but think is interesting, I think that Chalmers is one of the best philosophers alive today. Seriously, he does a lot of good work.

It’s too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle’s. Searle is clearer on some points and EY is clearer on others, but other than the AI stuff they take a very similar approach.

EDIT: To be clear, John Searle has written a lot, lot more than the one paper on the Chinese Room, most of it having nothing to do with AI.

So… admittedly my main acquaintance with Searle is the Chinese Room argument that brains have ‘special causal powers’, which made me not particularly interested in investigating him any further. But the Chinese Room argument makes Searle seem like an obvious non-reductionist with respect to not only consciousness but even meaning; he denies that an account of meaning can be given in terms of the formal/effective properties of a reasoner. I’ve been rendering constructive accounts of how to build meaningful thoughts out of “merely” effective constituents! What part of Searle is supposed to be parallel to that?

I guess I must have misunderstood something somewhere along the way, since I don’t see where in this sequence you provide “constructive accounts of how to build meaningful thoughts out of ‘merely’ effective constituents”. Indeed, you explicitly say “For a statement to be … true or alternatively false, it must talk about stuff you can find in relation to yourself by tracing out causal links.” This strikes me as parallel to Searle’s view that consciousness imposes meaning.

But, more generally, Searle says his life’s work is to explain how things like “money” and “human rights” can exist in “a world consisting entirely of physical particles in fields of force”; this strikes me as akin to your Great Reductionist Project.

Searle says his life’s work is to ex­plain how things like “money” and “hu­man rights” can ex­ist in “a world con­sist­ing en­tirely of phys­i­cal par­ti­cles in fields of force”;

Some­one should tell him this has already been done: dis­solv­ing that kind of con­fu­sion is liter­ally part of LessWrong 101, i.e. the Mind Pro­jec­tion Fal­lacy. Money and hu­man rights and so forth are prop­er­ties of minds mod­el­ing par­ti­cles, not prop­er­ties of the par­ti­cles them­selves.

That this is still his (or any other philoso­pher’s) life’s work is kind of sad, ac­tu­ally.

I guess my phras­ing was un­clear. What Searle is try­ing to do is gen­er­ate re­duc­tions for things like “money” and “hu­man rights”; I think EY is try­ing to do some­thing similar and it takes him more than just one ar­ti­cle on the Mind Pro­jec­tion Fal­lacy. (Even once you es­tab­lish that it’s prop­er­ties of minds, not par­ti­cles, there’s still a lot of work left to do.)

Or maybe Searle is tack­ling a much harder ver­sion of the prob­lem, for in­stance ex­plain­ing how things like hu­man rights and ethics can be bind­ing or obli­ga­tory on peo­ple when they are “all in the mind”, ex­plain­ing why one per­son should be be­holden to an­other’s mind pro­jec­tion.

Note that “should be be­holden” is a con­cept from within an eth­i­cal sys­tem; so in­vok­ing it in refer­ence to an en­tire eth­i­cal sys­tem is a cat­e­gory er­ror.

Also, I feel that the se­quences do pretty well at ex­plain­ing the in­stru­men­tal rea­sons that agents with goals have ethics; even ethics which may, in some cir­cum­stances, pro­hibit reach­ing their goals.

This strikes me as par­allel to Searle’s view that con­scious­ness im­poses mean­ing.

Why? Did I mention consciousness somewhere? Is there some reason a non-conscious software program hooked up to a sensor couldn't do the same thing?

I don’t think Searle and I agree on what con­sti­tutes a phys­i­cal par­ti­cle. For ex­am­ple, he thinks ‘phys­i­cal’ par­ti­cles are al­lowed to have spe­cial causal pow­ers apart from their merely for­mal prop­er­ties which cause their sen­tences to be mean­ingful. So far as I’m con­cerned, when you tell me about the struc­ture of some­thing’s effects on the par­ti­cle fields, there shouldn’t be any­thing left af­ter that—any­thing left is ex­t­ra­phys­i­cal.

Searle's views have nothing to do with attributing novel properties to fundamental particles. They are more to do with identifying mental properties with higher-level physical properties, which are themselves irreducible in one sense (but also reducible in another sense).

It’s too bad EY is deeply ide­olog­i­cally com­mit­ted to a differ­ent po­si­tion on AI, be­cause oth­er­wise his philos­o­phy seems to very closely par­allel John Searle’s

Per­haps I’m con­fused, but isn’t Searle the guy who came up with that stupid Chi­nese Room thing? I don’t see at all how that’s re­motely par­allel to LW philos­o­phy, or why it would be a bad thing to be ide­olog­i­cally op­posed to his ap­proach to AI. (He seems to think it’s im­pos­si­ble to have AI, af­ter all, and ar­gues from the bot­tom line for that po­si­tion.)

I was talk­ing about Searle’s non-AI work, but since you brought it up, Searle’s view is:

qualia ex­ists (be­cause: we ex­pe­rience it)

the brain causes qualia (be­cause: if you cut off any other part of some­one they still seem to have qualia)

if you simu­late a brain with a Tur­ing ma­chine, it won’t have qualia (be­cause: qualia is clearly a ba­sic fact of physics and there’s no way just us­ing physics to tell whether some­thing is a Tur­ing-ma­chine-simu­lat­ing-a-brain or not)

I think the first point is unfounded (or misguided). We do things (like moving, and thinking). We notice and can report that we've done things, and occasionally we notice and can report that we've noticed that we've done something. That we can report how things appear to a part of us that can reflect upon stimuli is not important enough to be called 'qualia'. That we notice that we find experience 'ineffable' is not a surprise either—you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving).
So, all we re­ally have is the abil­ity to no­tice and re­port that which has been ad­van­ta­geous for us to re­port in the evolu­tion­ary his­tory of the hu­man (these stim­uli that we can no­tice are called ‘ex­pe­riences’). There is noth­ing mys­te­ri­ous here, and the word ‘qualia’ always seems to be used mys­te­ri­ously—so I don’t think the first point car­ries the weight it might ap­pear to.

if you simu­late a brain with a Tur­ing ma­chine, it won’t have qualia (be­cause: qualia is clearly a ba­sic fact of physics and there’s no way just us­ing physics to tell whether some­thing is a Tur­ing-ma­chine-simu­lat­ing-a-brain or not)

Qualia is not clearly a basic fact of physics. I made the point that we would not expect a species designed by natural selection to be able to report or comprehend its most detailed, inner workings, solely on the evidence of what it can report and notice. But this is all skirting around the core idea of LessWrong: The map is not the territory. Just because something seems fundamental does not mean it is. Just because it seems like a Turing machine couldn't be doing consciousness, doesn't mean that is how it is. We need to understand how it came to be that we feel what we feel before we go making big claims about the fundamental nature of reality. This is what is worked on in LessWrong, not in Searle's philosophy.

That we notice that we find experience 'ineffable' is not a surprise either—you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving).

If the ineffability of qualia is down to the complexity of fine-grained neural behaviour, then the question is why anything is effable—people can communicate about all sorts of things that aren't sensations (and in many cases are abstract and "in the head").

I’m not sure that I fol­low. Can any­thing we talk about be re­duced to less than the ba­sic stim­uli we no­tice our­selves hav­ing?

All words (that mean any­thing) re­fer to some­thing. When I talk about ‘gui­tars’, I re­mem­ber ex­pe­riences I’ve had which I as­so­ci­ate with the word (i.e. gui­tars). Most hu­mans have similar make­ups, in that we learn in similar ways, and ex­pe­rience in similar ways (I’m just talk­ing about the psy­cholog­i­cal unity of hu­mans, and how far our brain de­sign is from, say, mice). So, we can talk about things, be­cause we’ve learnt to re­fer cer­tain ex­pe­riences (words) to oth­ers (gui­tars).

Neither of the two can refer to anything other than the experiences we have. Anything we talk about is in relation to our experiences (or possibly even meaningless).

Most of the classic reductions are reductions to things beneath perceivable stimuli, e.g. heat to molecular motion. Reductionism and physicalism would be in very bad trouble if language and conceptualisation grounded out where perception does. The theory also mispredicts that we would be able to communicate our sensations, but struggle to communicate abstract (e.g. mathematical) ideas with a distant relationship, or no relationship, to sensation. In fact, the classic reductions are to the basic entities of physics, which are ultimately defined mathematically, and often hard to visualise or otherwise relate to sensation.

You could point out the differ­ent con­stituents of ex­pe­rience that feel fun­da­men­tal, but they them­selves (e.g. Red) don’t feel as though they are made up of any­thing more than them­selves.

When we talk about atoms, how­ever, that isn’t a ba­sic piece of mind that mind can talk about. My mind feels as though it is con­sti­tuted of qualia, and it can re­fer to atoms. I don’t ex­pe­rience an atom, I ex­pe­rience large groups of them, in com­plex ar­range­ments. I can re­fer to the atom us­ing larger, com­plex ar­range­ments of neu­rons (atoms). Even though, when my mind asks what the ba­sic parts of re­al­ity are, it has a chain of refer­ence point­ing to atoms, each part of that chain is a set of neu­ral con­nec­tions, that don’t feel re­ducible.

Even on reflection, our experiences reduce to qualia. We deduce that qualia are made of atoms, but that doesn't mean that our experience feels like it's been reduced to atoms.

I'm saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not. So, qualia aren't the problem for a Turing machine they appear to be.

Also, we all share these ex­pe­rience ‘parts’ with most other hu­mans, due to the psy­cholog­i­cal unity of hu­mankind. So, if we’re all sat down at an early age, and drilled with cer­tain pat­terns of mind parts (times-ta­bles), then we should ex­pect to be able to draw upon them at ease.

My origi­nal point, how­ever, was just that the map isn’t the ter­ri­tory. Qualia don’t get spe­cial at­ten­tion just be­cause they feel differ­ent. They have a perfectly nat­u­ral ex­pla­na­tion, and you don’t get to make game-chang­ing claims about the ter­ri­tory un­til you’ve made sure your map is pretty spot-on.

I’m say­ing that we should ex­pect ex­pe­rience to feel as if made of fun­da­men­tal, in­ef­fable parts, even though we know that it is not.

I don't see why. Saying that experience is really complex neural activity isn't enough to explain that, because thought is really complex neural activity as well, and we can communicate and unpack concepts.

So, qualia aren't the problem for a Turing machine they appear to be.

Can you write the code for SeeRed()? Or are you saying that TMs would have ineffable concepts?

Qualia don't get special attention just because they feel different. They have a perfectly natural explanation,

You've inverted the problem: you have created the expectation that nothing mental is effable.

No, I’m say­ing that no ba­sic, men­tal part will feel ef­fable. Us­ing our cog­ni­tion, we can make com­plex no­tions of atoms and gui­tars, built up in our minds, and these will ex­plain why our men­tal as­pects feel fun­da­men­tal, but they will still feel fun­da­men­tal.

I'm saying that there are (something like) certain constructs in the brain, that are used whenever the most simple conscious thought or feeling is expressed. They're even used when we don't choose to express something, like when we look at something. We immediately see its components (surfaces, legs, handles), and the ones we can't break down (lines, colours) feel like the most basic parts of those representations in our minds.

Perhaps the construct that we identify as red is a set of neurons XYZ firing. If so, whenever we notice (that is, other sets of neurons observe) that XYZ go off, we just take it to be 'red'. It really appears to be red, and none of the other workings of the neurons can break it down any further. It feels ineffable, because we are not privy to everything that's going on. We can simply use a very restricted portion of the brain to examine other chunks, and give them different labels.

How­ever, we can use other neu­ronal pat­terns, to re­fer to and talk about atoms. Large groups of com­plex neu­ral firings can ob­serve and re­flect upon ex­per­i­men­tal re­sults that show that the brain is made of atoms.

Now, even though we can build up a model of atoms, and prove that the ba­sic fea­tures of con­scious ex­pe­rience (red­ness, lines, the hear­ing of a mid­dle C) are made of atoms, the fact is, we’re still us­ing com­plex neu­ronal pat­terns to think about these. The atom may be fun­da­men­tal, but it takes a lot of com­plex­ity for me to think about the atom. Con­scious­ness re­ally is re­ducible to atoms, but when I in­spect con­scious­ness, it still feels like a big com­plex set of neu­rons that my con­scious brain can’t un­der­stand. It still feels fun­da­men­tal.

Ex­pe­ri­en­tially, red­ness doesn’t feel like atoms be­cause our con­scious minds can­not re­duce it in ex­pe­rience, but they can prove that it is re­ducible. Peo­ple make the jump that, be­cause com­plex pat­terns in one part of the brain (one con­scious part) can­not re­duce an­other (con­scious) part to mere atoms, it must be a fun­da­men­tal part of re­al­ity. How­ever, this does not fol­low log­i­cally—you can’t as­sume your con­scious ex­pe­rience can com­pre­hend ev­ery­thing you think and feel at the most fun­da­men­tal level, purely by re­flec­tion.

I feel I've gone on too long, in trying to give an example of how something could feel basic but not be. I'm just saying we're not privy to everything that's going on, so we can't make massive knowledge claims about it, i.e. that a Turing machine couldn't experience what we're experiencing, purely by appeal to reflection. We just aren't reflectively transparent.

I can't really speak for LW as a whole, but I'd guess that among the people here who don't believe¹ "qualia doesn't exist", 1 and 2 are fine, but we have issues with 3, as expanded below. Relatedly, there seems to be some confusion between the "boring AI" proposition, that you can make computers do reasoning, and Searle's "strong AI" thing he's trying to refute, which says that AIs running on computers would have both consciousness and some magical "intentionality". "Strong AI" shouldn't actually concern us, except in talking about EMs or trying to make our FAI non-conscious.

3. if you simu­late a brain with a Tur­ing ma­chine, it won’t have qualia

Pretty much dis­agree.

qualia is clearly a ba­sic fact of physics

Really dis­agree.

and there’s no way just us­ing physics to tell whether some­thing is a Tur­ing-ma­chine-simu­lat­ing-a-brain or not

And this seems re­ally un­likely.

¹ I qual­ify my state­ment like this be­cause there is a long-stand­ing con­fu­sion over the use of the word “qualia” as de­scribed in my par­en­thet­i­cal here.

Well, let’s be clear: the ar­gu­ment I laid out is try­ing to re­fute the claim that “I can cre­ate a hu­man-level con­scious­ness with a Tur­ing ma­chine”. It doesn’t mean you couldn’t cre­ate an AI us­ing some­thing other than a pure Tur­ing ma­chine and it doesn’t mean Tur­ing ma­chines can’t do other smart com­pu­ta­tions. But it does mean that up­load­ing a brain into a Von Neu­mann ma­chine isn’t go­ing to keep you al­ive.

So if you dis­agree that qualia is a ba­sic fact of physics, what do you think it re­duces to? Is there any­thing else that has a first-per­son on­tol­ogy the way qualia does?

And if you think physics can tell whether some­thing is a Tur­ing-ma­chine-simu­lat­ing-a-brain, what’s the phys­i­cal al­gorithm for look­ing at a se­ries of phys­i­cal par­ti­cles and de­cid­ing whether it’s ex­e­cut­ing a par­tic­u­lar com­pu­ta­tion or not?

So if you dis­agree that qualia is a ba­sic fact of physics, what do you think it re­duces to?

Some­thing brains do, ob­vi­ously. One way or an­other.

And if you think physics can tell whether some­thing is a Tur­ing-ma­chine-simu­lat­ing-a-brain, what’s the phys­i­cal al­gorithm for look­ing at a se­ries of phys­i­cal par­ti­cles and de­cid­ing whether it’s ex­e­cut­ing a par­tic­u­lar com­pu­ta­tion or not?

I should per­haps be ask­ing what ev­i­dence Searle has for think­ing he knows things like what qualia is, or what a com­pu­ta­tion is. My state­ments were both nega­tive: it is not clear that qualia is a ba­sic fact of physics; it is not ob­vi­ous that you can’t de­scribe com­pu­ta­tion in phys­i­cal terms. Searle just makes these as­sump­tions.

If you must have an an­swer, how about this: a phys­i­cal sys­tem P is a com­pu­ta­tion of a value V if adding as premises the ini­tial and fi­nal states of P and a tran­si­tion func­tion de­scribing the physics of P short­ens a for­mal proof that V = what­ever.
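That proof-shortening criterion is hard to operationalize directly, but a crude executable caricature of its spirit (all names here are hypothetical illustrations of mine, not anyone's actual proposal) is to check whether running the system's transition function forward from its initial state actually reproduces the observed final state, and whether that final state decodes to the claimed value:

```python
# Toy sketch: treat a "physical system" as a state evolved by a transition
# function, and ask whether its trajectory lines up with a claimed computation.

def is_computation_of(initial_state, final_state, transition, steps,
                      claimed_value, decode):
    """Loose stand-in for the criterion above: run the 'physics' forward and
    check the result both matches observation and decodes to the value V."""
    state = initial_state
    for _ in range(steps):
        state = transition(state)
    return state == final_state and decode(state) == claimed_value

# A system whose dynamics happen to implement repeated addition:
# state is (tick count, accumulator); each tick adds 3 to the accumulator.
transition = lambda s: (s[0] + 1, s[1] + 3)

# Does this system compute 2 * 3 = 6? Start at (0, 0) and run 2 ticks.
print(is_computation_of((0, 0), (2, 6), transition, steps=2,
                        claimed_value=6, decode=lambda s: s[1]))  # True
```

This deliberately sidesteps the "shortens a formal proof" part, which is the philosophically interesting bit; it only illustrates the easier half, that matching a physical trajectory against a claimed computation is at least a well-defined check.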

if you simu­late a brain with a Tur­ing ma­chine, it won’t have qualia (be­cause: qualia is clearly a ba­sic fact of physics and there’s no way just us­ing physics to tell whether some­thing is a Tur­ing-ma­chine-simu­lat­ing-a-brain or not)

There's your problem. Why the hell should we assume that "qualia is clearly a basic fact of physics"?

Well, I prob­a­bly can’t ex­plain it as elo­quently as oth­ers here—you should try the search bar, there are prob­a­bly posts on the topic much bet­ter than this one—but my po­si­tion would be as fol­lows:

Qualia are ex­pe­rienced di­rectly by your mind.

Every­thing about your mind seems to re­duce to your brain.

There­fore, qualia are prob­a­bly part of your brain.

Fur­ther­more, I would point out two things: one, that qualia seem to be es­sen­tial parts of hav­ing a mind; I cer­tainly can’t imag­ine a mind with­out qualia; and two, that we can view (very roughly) images of what peo­ple see in the tha­la­mus, which would sug­gest that what we call “qualia” might sim­ply be part of, y’know, data pro­cess­ing.

Re #1: I cer­tainly agree that we ex­pe­rience things, and that there­fore the causes of our ex­pe­rience ex­ist. I don’t re­ally care what name we at­tach to those causes… what mat­ters is the thing and how it re­lates to other things, not the la­bel. That said, in gen­eral I think the la­bel “qualia” causes more trou­ble due to con­cep­tual bag­gage than it re­solves, much like the la­bel “soul”.

Re #2: This ar­gu­ment is over­sim­plis­tic, but I find the con­clu­sion likely. More pre­cisely: there are things out­side my brain (like, say, my adrenal glands or my tes­ti­cles) that al­ter cer­tain as­pects of my ex­pe­rience when re­moved, so it’s pos­si­ble that the causes of those as­pects reside out­side my brain. That said, I don’t find it likely; I’m in­clined to agree that the causes of my ex­pe­rience reside in my brain. I still don’t care much what la­bel we at­tach to those causes, and I still think the la­bel “qualia” causes more con­fu­sion due to con­cep­tual bag­gage than it re­solves.

Re #3: I see no rea­son at all to be­lieve this. The causes of ex­pe­rience are no more “clearly a ba­sic fact of physics” than the causes of grav­ity; all that makes them seem “clearly ba­sic” to some peo­ple is the fact that we don’t un­der­stand them in ad­e­quate de­tail yet.

the brain causes qualia (be­cause: if you cut off any other part of some­one they still seem to have qualia)

if you simu­late a brain with a Tur­ing ma­chine, it won’t have qualia (be­cause: qualia is clearly a ba­sic fact of physics and there’s no way just us­ing physics to tell whether some­thing is a Tur­ing-ma­chine-simu­lat­ing-a-brain or not)

Which part does LW dis­agree with and why?

The whole thing: it's the Chinese Room all over again, an intuition pump that begs the very question it's purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word "understanding" is fudged in the Chinese Room argument, but basically it's the same.)

I sup­pose you could say that there’s a grudg­ing par­tial agree­ment with your point num­ber two: that “the brain causes qualia”. The rest of what you listed, how­ever, is drivel, as is easy to see if you sub­sti­tute some other term be­sides “qualia”, e.g.:

Free will ex­ists (be­cause: we ex­pe­rience it)

The brain causes free will (be­cause if you cut off any part, etc.)

If you simu­late a brain with a Tur­ing ma­chine, it won’t have free will be­cause clearly it’s a ba­sic fact of physics and there’s no way to tell just us­ing physics whether some­thing is a ma­chine simu­lat­ing a brain or not.

It doesn’t mat­ter what term you plug into this in place of “qualia” or “free will”, it could be “love” or “char­ity” or “in­ter­est in death metal”, and it’s still not say­ing any­thing more profound than, “I don’t think ma­chines are as good as real peo­ple, so there!”

Or more pre­cisely: “When I think of peo­ple with X it makes me feel some­thing spe­cial that I don’t feel when I think of ma­chines with X, there­fore there must be some spe­cial qual­ity that sep­a­rates peo­ple from ma­chines, mak­ing ma­chine X ‘just a simu­la­tion’.” This is the root of all these Searle-ian ar­gu­ments, and they are triv­ially dis­solved by un­der­stand­ing that the spe­cial feel­ing peo­ple get when they think of X is also a prop­erty of how brains work.

Speci­fi­cally, the thing that drives these ar­gu­ments is our in­built ma­chin­ery that clas­sifies things as mind-hav­ing or not-mind-hav­ing, for pur­poses of pre­dic­tion-mak­ing. But the feel­ing that we get that a thing is mind-hav­ing or not-mind-hav­ing is based on what was use­ful evolu­tion­ar­ily, not on what the ac­tual truth is. Sear­lian (Surly?) ar­gu­ments are thus in ex­actly the same camp as any other faith-based ar­gu­ment: ele­vat­ing one’s feel­ings to Truth, ir­re­spec­tive of the ev­i­dence against them.

(Begin­ning an ar­gu­ment for the ex­is­tence of qualia with a bare as­ser­tion that they ex­ist is a lit­tle more ob­vi­ous than the way that the word “un­der­stand­ing” is fudged in the Chi­nese Room ar­gu­ment, but ba­si­cally it’s the same.)

Just a nitpick: the argument Aaron presented wasn't an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn't beg the question. Aaron's argument was an argument against artificial consciousness.

Also, I think Aaron's presentation of (3) was a bit unclear, but it's not so bad a premise as you think. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating Turing machine is entirely reducible to purely physical descriptions, brain-simulating Turing machines won't experience qualia. So if we have qualia, and count as conscious in virtue of having qualia (1), then brain-simulating Turing machines won't count as conscious. If we don't have qualia, i.e. if all our mental states are reducible to purely physical descriptions, then the argument is unsound because premise (1) is false.

You’re right that you can plug many a term in to re­place ‘qualia’, so long as those things are not re­ducible to purely phys­i­cal de­scrip­tions. So you couldn’t plug in, say, heart-at­tacks.

This is the root of all these Searle-ian ar­gu­ments, and they are triv­ially dis­solved by un­der­stand­ing that the spe­cial feel­ing peo­ple get when they think of X is also a prop­erty of how brains work.

Could you ex­plain this a bit more? I don’t see how it’s rele­vant to the ar­gu­ment. Searle is not ar­gu­ing on the ba­sis of any spe­cial feel­ings. This seems like a straw man to me, at the mo­ment, but I may not be ap­pre­ci­at­ing the flaws in Searle’s ar­gu­ment.

the ar­gu­ment Aaron pre­sented wasn’t an ar­gu­ment for the ex­is­tence of qualia, and so tak­ing the ex­is­tence of qualia as a premise doesn’t beg the question

In or­der for the ar­gu­ment to make any sense, you have to buy into sev­eral as­sump­tions which ba­si­cally are the ar­gu­ment. It’s “qualia are spe­cial be­cause they’re spe­cial, QED”. I thought about call­ing it cir­cu­lar rea­son­ing, ex­cept that it seems closer to beg­ging the ques­tion. If you have a bet­ter way to put it, by all means share.

Could you ex­plain this a bit more? I don’t see how it’s rele­vant to the ar­gu­ment. Searle is not ar­gu­ing on the ba­sis of any spe­cial feel­ings. This seems like a straw man to me, at the mo­ment, but I may not be ap­pre­ci­at­ing the flaws in Searle’s ar­gu­ment.

When I said that our mind de­tec­tion cir­cuitry was the root of the ar­gu­ment, I didn’t mean that Searle was overtly ar­gu­ing on the ba­sis of his feel­ings. What I’m say­ing is, the only ev­i­dence for Searle-type premises are the feel­ings cre­ated by our mind-de­tec­tion cir­cuitry. If you as­sume these feel­ings mean some­thing, then Searle-ish ar­gu­ments will seem cor­rect, and Searle-ish premises will seem ob­vi­ous be­yond ques­tion.

How­ever, if you truly grok the mind-pro­jec­tion fal­lacy, then Searle-type premises are just as ob­vi­ously non­sen­si­cal, and there’s no rea­son to pay any at­ten­tion to the ar­gu­ments built on top of them. Even as ba­sic a tool as Ra­tion­al­ist Ta­boo suffices to de­bunk the premises be­fore the ar­gu­ment can get off the ground.

you have to buy into sev­eral as­sump­tions which ba­si­cally are the ar­gu­ment.

Any valid argument has a conclusion that is entailed by its premises taken jointly. Circularity is when the whole conclusion is entailed by one premise, with the others being window-dressing.

you have to buy into sev­eral as­sump­tions which ba­si­cally are the ar­gu­ment.

I think there is a way that ripe tomatoes seem visually: how is that mind-projection?

But … if you're assuming that qualia are "not reducible to purely physical descriptions", and you need qualia to be conscious, then obviously brain-simulations won't be conscious. But those assumptions seem to be the bulk of the position he's defending, aren't they?

But those as­sump­tions seem to be the bulk of the po­si­tion he’s defend­ing, aren’t they?

Right, the argument comes down, for most of us, to the first premise: do we or do we not have mental states irreducible to purely physical conditions? Aaron didn't present an argument for that, he just presented Searle's argument against AI from that. But you're right to ask for a defense of that premise, since it's the crucial one and it's (at the moment) undefended here.

Pre­sent­ing an ob­vi­ous re­sult of a nonob­vi­ous premise as if it was a nonob­vi­ous con­clu­sion seems sus­pi­cious, as if he’s try­ing to trick listen­ers into ac­cept­ing his con­clu­sion even when their pri­ors differ.

Pre­sent­ing a triv­ial con­clu­sion from non­triv­ial premises as a non­triv­ial con­clu­sion seems suspicious

Not only sus­pi­cious, but im­pos­si­ble: if the premises are non-triv­ial, the con­clu­sion is non-triv­ial.

In ev­ery ar­gu­ment, the con­clu­sion fol­lows straight away from the premises. If you ac­cept the premises, and the ar­gu­ment is valid, then you must ac­cept the con­clu­sion. The con­clu­sion does not need any fur­ther sup­port.

(3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating Turing machine is entirely reducible to purely physical descriptions, brain-simulating Turing machines won't experience qualia.

To pick a fur­ther nit, the ar­gu­ment is more that qualia can’t be en­g­ineered into an AI. If an AI im­ple­men­ta­tion has qualia at all, it would be serendipi­tous.

To pick a fur­ther nit, the ar­gu­ment is more that qualia can’t be en­g­ineered into an AI. If an AI im­ple­men­ta­tion has qualia at all, it would be serendipi­tous.

That's a possibility, but not as I laid out the argument: if being conscious entails having qualia, and if qualia are all irreducible to purely physical descriptions, and every state of a Turing machine is reducible to a purely physical description, then Turing machines can't simulate consciousness. That's not very neat, but I do believe it's valid. Your alternative is plausible, but it requires my 'Turing machines are reducible to purely physical descriptions' premise to be false.

Begin­ning an ar­gu­ment for the ex­is­tence of qualia with a bare as­ser­tion that they exist

Huh? This isn’t an ar­gu­ment for the ex­is­tence of qualia—it’s an at­tempt to figure out whether you be­lieve in qualia or not. So I take it you dis­agree with step one, that qualia ex­ists? Do you think you are a philo­soph­i­cal zom­bie?

I do think es­sen­tially the same ar­gu­ment goes through for free will, so I don’t find your re­duc­tio at all con­vinc­ing. There’s no rea­son, how­ever, to be­lieve that “love” or “char­ity” is a ba­sic fact of physics, since it’s fairly ob­vi­ous how to re­duce these. Do you think you can re­duce qualia?

I don’t un­der­stand why you think this is a claim about my feel­ings.

Sup­pose that neu­ro­scien­tists some day show that the quale of see­ing red matches a cer­tain brain struc­ture or a neu­ron firing pat­tern or a neuro-chem­i­cal pro­cess in all hu­mans. Would you then say that the quale of red has been re­duced?

Imag­ine a flash­light with a red piece of cel­lo­phane over it pointed at a wall. Scien­tists some day dis­cover that the red dot on the wall is caused by the flash­light—it ap­pears each and ev­ery time the flash­light fires and only when the flash­light is firing. How­ever, the red dot on the wall is cer­tainly not the same as the flash­light: one is a flash­light and one is a red dot.

The red dot, on the other hand, could be re­duced to some sort of in­ter­ac­tion be­tween cer­tain fre­quen­cies of light-waves and wall-atoms and so on. But it will cer­tainly not get re­duced to flash­lights.

By the same to­ken, you are not go­ing to re­duce the-sub­jec­tive-ex­pe­rience-of-see­ing-red to neu­rons; sub­jec­tive ex­pe­riences aren’t made out of neu­rons any more than red dots are made of flash­lights.

By the same to­ken, you are not go­ing to re­duce the-sub­jec­tive-ex­pe­rience-of-see­ing-red to neu­rons; sub­jec­tive ex­pe­riences aren’t made out of neu­rons any more than red dots are made of flash­lights.

Ok, that’s where we dis­agree. To me the sub­jec­tive ex­pe­rience is the pro­cess in my brain and noth­ing else.

By the same to­ken, you are not go­ing to re­duce the-sub­jec­tive-ex­pe­rience-of-see­ing-red to neu­rons; sub­jec­tive ex­pe­riences aren’t made out of neu­rons any more than red dots are made of flash­lights.

I think that any­one talk­ing se­ri­ously about “qualia” is con­fused, in the same way that any­one talk­ing se­ri­ously about “free will” is.

That is, they’re words peo­ple use to de­scribe ex­pe­riences as if they were ob­jects or ca­pa­bil­ities. Free will isn’t some­thing you have, it’s some­thing you feel. Same for “qualia”.

I do think es­sen­tially the same ar­gu­ment goes through for free will

Dis­solv­ing free will is con­sid­ered an en­try-level philo­soph­i­cal ex­er­cise for Less­wrong. If you haven’t cov­ered that much of the se­quences home­work, it’s un­likely that you’ll find this dis­cus­sion es­pe­cially en­light­en­ing.

(More to the point, you’re do­ing the rough equiv­a­lent of bug­ging peo­ple on a news­group about a ques­tion that is an­swered in the FAQ or an RTFM.)

the neu­ron firing pat­tern is pre­sum­ably the cause of the quale, it’s cer­tainly not the quale it­self.

And you seem to con­sider this self-ev­i­dent. Well, it seemed self-ev­i­dent to me that Martha’s phys­i­cal re­ac­tion would ‘be’ a quale. So where do we go from there?

(Sup­pose your neu­rons re­acted all the time the way they do now when you see or­ange light, ex­cept that they couldn’t con­nect it to any­thing else—no similar­i­ties, no differ­ences, no links of any kind. Would you see any­thing?)

Have you also read the mini-se­quence I linked? In the grand­par­ent I said “phys­i­cal re­ac­tion” in­stead of “func­tional”, which seems like a mis­take on my part, but I as­sumed you had some vague idea of where I’m com­ing from.

I guess it re­ally de­pends on what you mean by free will. If by free will, pjeby meant some kind of qual­i­ta­tive ex­pe­rience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia ar­gu­ment goes through. If he means by it some­thing more com­pli­cated, then I don’t see how point one holds (we ex­pe­rience it), and the ar­gu­ment ob­vi­ously doesn’t go through.

Begin­ning an ar­gu­ment for the ex­is­tence of qualia with a bare as­ser­tion that they ex­ist

But that's not contentious. Qualia are things like the appearance of tomatoes or the taste of lemon. I've seen tomatoes and tasted lemons.

This is the root of all these Searle-ian ar­gu­ments, and they are triv­ially dis­solved by un­der­stand­ing that the spe­cial feel­ing peo­ple get when they think of X is also a prop­erty of how brains work.

But Searle says that feelings, understanding, etc. are properties of how the brain works. What he argues against is the claim that they are computational properties. But it is also uncontentious that physicalism can be true and computationalism false.

if you simu­late a brain with a Tur­ing ma­chine, it won’t have qualia (be­cause: qualia is clearly a ba­sic fact of physics and there’s no way just us­ing physics to tell whether some­thing is a Tur­ing-ma­chine-simu­lat­ing-a-brain or not)

It isn't even clear to Searle that qualia are physically basic. He thinks consciousness is a high-level outcome of the brain's concrete causal powers. His objection to computational approaches is rooted in the abstract nature of computation, not in the physical basicness of qualia. (In fact, he doesn't use the word "qualia", although he often seems to be talking about the same thing.)

I found the use of mul­ti­pli­ca­tion par­tic­u­larly use­ful, since it forced the reader to pay at­ten­tion to the phys­i­cal/​log­i­cal dis­tinc­tion. If, say, ad­di­tion had been used, then a de­ter­mined reader could try to use phys­i­cal con­straints alone (though they would be cheat­ing).

If we as­sume that the 5 ap­ples are spher­i­cal, and we cut the largest square sec­tions pos­si­ble out of each of them (leav­ing the top and bot­tom alone, as that doesn’t af­fect whether the shape is a square when viewed from the top down), it turns out that these new squared ap­ples have a vol­ume of about 0.77 times that of a spher­i­cal ap­ple. That means that your 2 round ap­ples and your 3 round ap­ples be­come about 6.49 squared ap­ples. Round­ing down, that is, in fact, 6 square ap­ples.
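The commenter's figures can be checked numerically. The sketch below assumes one particular geometric reading of the comment (a unit sphere intersected with the square prism whose cross-section is the largest square inscribed in the sphere's equator, side sqrt(2)); the function name and the midpoint-rule approach are my own, not anything from the thread.

```python
import math

def squared_apple_ratio(n=400):
    """Volume of a unit sphere intersected with the square prism whose
    cross-section is the largest square inscribed in the equator (side
    sqrt(2)), divided by the full sphere's volume.  Midpoint-rule
    integration on an n-by-n grid; the geometry is an assumed reading
    of the comment."""
    a = 1 / math.sqrt(2)          # half-side of the inscribed square
    h = 2 * a / n
    volume = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * h
        for j in range(n):
            y = -a + (j + 0.5) * h
            # Height of the sphere above and below (x, y); the square's
            # corners just touch the equator, so 1 - x^2 - y^2 >= 0.
            volume += 2.0 * math.sqrt(max(0.0, 1.0 - x * x - y * y)) * h * h
    return volume / ((4.0 / 3.0) * math.pi)

ratio = squared_apple_ratio()
print(round(ratio, 2))            # roughly 0.77, as the comment says
print(round(5 / ratio, 2))        # roughly 6.5 "squared apples" from 5 round ones
```

The integration confirms the comment's ballpark: about 0.77 of each spherical apple survives the squaring, so five round apples are worth roughly six and a half squared ones.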

But I do think the ille­gal op­er­a­tion was kind of the point. It shows that not all math­e­mat­i­cal op­er­a­tions can be strictly re­duced to phys­i­cal ob­jects (well, out­side of the sub­strate that’s do­ing the com­put­ing, ob­vi­ously).

This is an in­ter­est­ing post but, I have to say, kind of frus­trat­ing. I have tried to fol­low the dis­cus­sions be­tween Esar and Rob­bBB and your sub­stan­tial elu­ci­da­tion as well as many other great com­ments, but I re­main kind-of in the dark. Below are some ques­tions which I had, as I read.

This ques­tion doesn’t feel like it should be very hard.

What ques­tion? What ex­actly is the prob­lem you are pur­port­ing to solve, here? If it is, “What is the truth con­di­tion of ‘If we took the num­ber of ap­ples in each pile, and mul­ti­plied those num­bers to­gether, we’d get six.’”, then doesn’t Tarski’s dis­quo­ta­tion schema give us the an­swer?

Nav­i­gat­ing to the six re­quires a mix­ture of phys­i­cal and log­i­cal reference

Not sure why you ob­scure mat­ters with idiosyn­cratic metaphors like ‘nav­i­gat­ing to the six’, but never mind. Can we in­fer from the dis­tinc­tion be­tween log­i­cal and phys­i­cal refer­ence that there is a dis­tinc­tion be­tween log­i­cal and phys­i­cal truth? It ap­pears you coun­te­nance the An­a­lytic/​Syn­thetic dis­tinc­tion—pre­cisely the dis­tinc­tion which is usu­ally con­sid­ered to have un­done log­i­cal pos­i­tivism. Do you have a preferred re­sponse to Quine’s fa­mous ar­gu­ment in ‘Two Dog­mas of Em­piri­cism’, or do you have a rea­son for think­ing you are im­mune to it? I think you think you aren’t do­ing philos­o­phy, so it doesn’t ap­ply, but then I re­ally don’t know how to un­der­stand what you’re say­ing. If your prob­lems are just com­pu­ta­tional, then surely you’re mak­ing mat­ters much harder for your­self than they should be (not that com­pu­ta­tional prob­lems aren’t some­times very hard).

Next we have to call the stuff on the table ‘ap­ples’. But how, oh how can we do this...?

How about by say­ing “Those are ap­ples”? What ex­actly is the prob­lem, here?

...when grind­ing the uni­verse and run­ning it through a sieve will re­veal not a sin­gle par­ti­cle of ap­ple­ness?

Here’s my best guess at what is ex­er­cis­ing you. You rea­son that only those prop­er­ties needed to ac­count for the con­sti­tu­tion and be­havi­our of the small­est parts of mat­ter are real, that be­ing an ap­ple is not among them, and hence that be­ing an ap­ple is not a real prop­erty. As­sum­ing this guess is right, what ex­actly is your rea­son for ac­cept­ing the first premise? It is not im­me­di­ately ob­vi­ous, though I know there are tra­di­tion­ally differ­ent rea­sons. The rea­son will in­form the ad­e­quacy of your an­swer.

Stan­dard physics uses the same fun­da­men­tal the­ory to de­scribe the flight of a Boe­ing 747 air­plane, and col­li­sions in the Rel­a­tivis­tic Heavy Ion Col­lider. Nu­clei and air­planes al­ike, ac­cord­ing to our un­der­stand­ing, are obey­ing spe­cial rel­a­tivity, quan­tum me­chan­ics, and chro­mo­dy­nam­ics.

So far so good...

We also use en­tirely differ­ent mod­els to un­der­stand the aero­dy­nam­ics of a 747 and a col­li­sion be­tween gold nu­clei in the RHIC. A com­puter mod­el­ing the aero­dy­nam­ics of a 747 may not con­tain a sin­gle to­ken, a sin­gle bit of RAM, that rep­re­sents a quark. (Or a quan­tum field, re­ally; but you get the idea.)

Noth­ing con­tro­ver­sial here, but it of course has noth­ing to do with our un­der­stand­ing of the prob­lem. If the un­der­stand­ing is cor­rect, the prob­lem ex­ists re­gard­less of whether any­one or thing ever imag­ines or rep­re­sents or refers to ap­ples or any­thing else. To in­tro­duce rep­re­sen­ta­tions and mod­els into the dis­cus­sion is only to con­fuse mat­ters, no?

So is the 747 made of some­thing other than quarks?

Where does this question come from? If my guess about the problem is correct, it is irrelevant. It may be that the property of being a 747 (apple) is not identical to the property of being in any very complicated way composed of quarks, bosons and leptons, even though a given 747 (apple) is made only of these particles. The (philosophical) thesis about properties is different from the scientific thesis about the constitution of physical objects.

No, we’re just mod­el­ing the 747 with rep­re­sen­ta­tional el­e­ments that do not have a one-to-one cor­re­spon­dence with in­di­vi­d­ual quarks.
Similarly with ap­ples.

Please clar­ify—what pre­cisely does the re­la­tion be­tween a com­puter model of a 747 and a 747 have to do with the meta­physics of prop­er­ties?

To com­pare a men­tal image of high-level ap­ple-ob­jects to phys­i­cal re­al­ity,

Can you say what you mean by this? For myself, this is something I only ever do very rarely: conjure a mental image, then see how it agrees or differs from what I'm looking at. To be sure, sets of neurons in my brain are being activated all the time by patterns of light hitting my retinas, but there's a lot of explanatory distance to cover to show these (the story about neural events and the story about images) are the same thing. In any case, this seems entirely irrelevant to the present concerns.

for it to be true

Are mental images the sorts of things which can be true (in the sense in which a sentence or proposition can be, as opposed to merely accurate)? Suppose I have a mental picture of a certain cat on a certain table, and that the cat is indeed on the table. Is my mental image true? Even if the cat in my image is the wrong colour? Or is sitting when the cat is standing? As far as I can see this isn't just nit-picking. You have some kind of AI model which involves mental images and which you seem to think needs a semantic theory, and it's just not clear how it all fits together.

...doesn’t re­quire that ap­ples be fun­da­men­tal in phys­i­cal law.

If my guess is cor­rect, your an­swer to the prob­lem as far as I can see is some­thing like “The prob­lem is not a prob­lem”.

A sin­gle dis­crete el­e­ment of fun­da­men­tal physics is not the only thing that a state­ment can ever be com­pared-to. We just need truth con­di­tions that cat­e­go­rize the low-level states of the uni­verse, so that differ­ent low-level phys­i­cal states are in­side or out­side the men­tal image of “some ap­ples on the table” or al­ter­na­tively “a kit­ten on the table”.

Can you give an ex­am­ple of a low-level state be­ing ‘in­side a men­tal image’ of “some ap­ples on the table”? I re­ally don’t know what this means.

Having gone through this once, here's a second pass at a gloss. You accept, reasonably, that "That is an apple" is true in English iff that (pointing to a certain apple) is an apple. The referent of the "that" we can take to be a certain object. The question arises, however, as to what the referent or other semantic value is of "is an apple". Plausibly, it is the property of being an apple. But, we may reasonably ask, what sort of thing is being an apple? I understand your answer is as follows:

Just as an in­di­vi­d­ual ap­ple is noth­ing more than a quite large num­ber of quarks and lep­tons and bosons in­ter­est­ingly as­sem­bled, be­ing an ap­ple is noth­ing more than be­ing a quite large num­ber of quarks and lep­tons and bosons as­sem­bled in a cer­tain in­ter­est­ing way.

Is this roughly a fair un­der­stand­ing? If so, please con­sider:

1) You will need to aug­ment your story to in­clude so-called etiol­ogy. The prop­erty of be­ing a 10-dol­lar bill is not equiv­a­lent to the prop­erty of be­ing in a cer­tain way com­posed of mat­ter—causal ori­gin/​his­tory mat­ters, too (perfect coun­terfeits).

2) The prob­lem of vague­ness of­ten seems like a paradigm of philo­soph­i­cal fu­til­ity but it is a real prob­lem. Sup­pose you could cross-breed ap­ples and pears, and have a spec­trum of in­di­vi­d­u­als rang­ing from un­prob­le­matic ap­ple to un­prob­le­matic pear (= non-ap­ple). What will the truth-con­di­tion be of the state­ment ‘That is an ap­ple’, point­ing to the piece of fruit in the mid­dle? Do you give up on bi­valence, or do you say that the state­ment is de­ter­mi­nately true or false, but there are deep episte­molog­i­cal prob­lems? Nei­ther an­swer seems satis­fac­tory, and where you come down may af­fect your the­ory.

3) If this story is correct, it will presumably apply to the whole very large hierarchy of properties, ranging from being a quark through being a proton and being a carbon atom up to being an apple and beyond. And the high-level properties will have at a minimum to be disjunctions of lower properties, even to accommodate such mundane facts as the existence of both green and red apples. And you may find ultimately that what is in question is more like a family-resemblance relation among the cases which constitute being an apple (if not apples, then tables and 747s, very likely). And then aren't you in danger simply of laboriously re-capitulating the history of 20th c. philosophical thought on the subject?

This is all philos­o­phy, which you’ve re­peat­edly said you aren’t in­ter­ested in do­ing. But that’s what you’re do­ing! If you’re just do­ing AI, you re­ally shouldn’t be wast­ing your time on these ques­tions, surely. Re­search into neu­ral nets is already mak­ing great progress on the ques­tion of how we make the dis­crim­i­na­tions we do. Why isn’t that enough for your pur­poses?

A last thought: there’s some­thing of a de­bate on this site about the value of tra­di­tional philos­o­phy. I think it has value, a big part of which is that it en­courages peo­ple to think care­fully and to ex­press them­selves pre­cisely. I don’t claim always to be as care­ful or pre­cise as I should be, but these are val­ues. Do­ing an­a­lytic philos­o­phy is some of the best ra­tio­nal­ity train­ing you can get.

Re­gard­ing your point num­bered 1 speci­fi­cally: the causal his­tory of mat­ter is con­sid­ered here as part of its phys­i­cal prop­er­ties in a block uni­verse, so this ob­jec­tion doesn’t ap­ply. See the older se­quence ar­ti­cle Time­less Physics for more on this.

Re­gard­ing points 2 and 3: The OP is say­ing that for some­thing to be an ap­ple means that its low-level phys­i­cal state matches some pat­tern, but not nec­es­sar­ily that the pat­tern match­ing func­tion must re­turn a strict True or False; there are fuzzy pat­tern match­ing func­tions as well. The older se­quence ar­ti­cle Similar­ity Clusters goes into this in more de­tail.
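A fuzzy pattern-matching function of the sort this reply gestures at can be sketched in a few lines. The feature names, the prototype, and its weights below are invented purely for illustration; nothing in the article or the Similarity Clusters post specifies them.

```python
def appleness(features):
    # Toy fuzzy pattern-matcher: returns a degree of membership in [0, 1]
    # rather than a strict True/False.  The prototype and feature names
    # are hypothetical, chosen only to illustrate the idea.
    prototype = {"roundness": 0.9, "redness": 0.7, "has_stem": 1.0}
    # Overlap between observed features and the prototype, capped per feature.
    overlap = sum(min(features.get(k, 0.0), v) for k, v in prototype.items())
    return overlap / sum(prototype.values())

print(appleness({"roundness": 0.9, "redness": 0.7, "has_stem": 1.0}))  # 1.0
print(appleness({"roundness": 0.5}))  # a partial match, strictly between 0 and 1
```

A borderline apple-pear then simply scores somewhere in the middle instead of forcing a bivalent verdict, which is the point being made about the article's truth conditions.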

On the other hand, your ob­jec­tions are to­tally le­git within the con­text of this ar­ti­cle and its ex­am­ples alone, and as an in­tro­duc­tory ar­ti­cle that’s a fine and ap­pro­pri­ate con­text to be work­ing from. Maybe the ar­ti­cle would be im­proved by some foot­notes and/​or ap­pro­pri­ate links? Then again, it’s already pretty long.

On the other other hand, as an in­tro­duc­tory ar­ti­cle its pur­pose is only to in­tro­duce what re­duc­tion­ism is and get peo­ple to grips with the no­tion of differ­ent lev­els of ab­strac­tion. If philo­soph­i­cal ar­gu­ments are be­ing made about e.g. what is “real”, or more sub­tly about what makes for an ap­pro­pri­ate defi­ni­tion of a word like “ap­ple”, then they aren’t be­ing made here, but in the ar­ti­cles that de­pend on this one. “Ly­ing to chil­dren” and all that.

where both phys­i­cal refer­ences and log­i­cal refer­ences are to be de­scribed ‘effec­tively’ or ‘for­mally’, in com­putable or log­i­cal form.

Can any­one say a bit more about why phys­i­cal refer­ences would need to be de­scribed ‘effec­tively’/​com­putably? Is this based on the as­sump­tion that the phys­i­cal uni­verse must be com­putable?

Can any­one say a bit more about why phys­i­cal refer­ences would need to be de­scribed ‘effec­tively’/​com­putably?

I think it's because, if they were described by an uncomputable procedure (one involving oracles or infinite resources, for example), then with very high probability they could not be computed by our brains.

I have had this ques­tion in my mind for ages. You say that these coun­ter­fac­tual uni­verses don’t ac­tu­ally ex­ist. But, ac­cord­ing to Many-Wor­lds, don’t all lawful Uni­verses ac­tu­ally re­ally re­ally ex­ist? I mean, isn’t there some am­pli­tude for Mr. Oswald to not have shot Kennedy, and then you get a blob where Kennedy didn’t get mur­dered?

I’ve been bang­ing my head against a wall on this and still can’t come to a con­clu­sion. Are the de­co­her­ent blobs ac­tu­ally ca­pa­ble of cre­at­ing mul­ti­ple his­to­ries on the ob­serv­able level, up here? It looks, to me, that they should be. I mean, if these par­ti­cles have each an am­pli­tude to “be” here and there, then there is some am­pli­tude for the com­bi­na­tion of all par­ti­cles in the Uni­verse to cor­re­spond to a com­pletely differ­ent macro Uni­verse.

On the other hand, that seems to also im­ply that there’s some am­pli­tude for things like Pres­i­dent Kennedy be­ing shot, and then sud­denly his wounds closed and he was up and run­ning again. And that doesn’t sound okay at all.

Abstractions like probability and number are constructed by us; they don't strictly exist, but it's useful to act as though they do, since they help organize our reasoning. It could be that, by coincidence, some part of the Real World corresponds precisely to the structure of our modal or mathematical reasoning; for instance, the many-worlds interpretation of QM could be true, or we could live in a Tegmark ensemble. But this would still just be an interesting coincidence. It wouldn't change the fact that our abstractions are our own; and if we discovered tomorrow that a Bohmian interpretation of QM is correct, rather than an Everettian one, it would have no foundational implications for such a high-level, anthropocentric phenomenon as probability theory.

Think­ing in this way is use­ful for two rea­sons. First, it in­su­lates our log­i­cal fic­tions from meta­phys­i­cal skep­ti­cism; our un­cer­tainty as to the ex­is­tence of a Pla­tonic realm of Num­ber need not un­der­mine our con­fi­dence that 2 and 2 make 4. Se­cond, it keeps us from be­ing tempted to slide down the slip­pery slope to treat­ing all our fic­tions (like cur­rency, and in­ten­tion­al­ity, and qualia, and Sher­lock Holmes) as equally meta­phys­i­cally com­mit­ting.

Well, whether probability and number exist or not is moot. The point of fact is that when you look at any quantum system, the probability of finding it in any given (continuous set of) state(s) equals the squared modulus of the amplitude for it to be in that state. As Mr. Yudkowsky once put it, and I paraphrase, "I still want to know the nonexistent laws that coordinate my meaningless Universe".

And my point is: as­sum­ing Quan­tum Physics is com­pletely cor­rect, with­out us adding the ad­di­tional pos­tu­lates, do all com­bi­na­tions of uni­verses ex­ist, su­per­posed to each other? That is to say: is the quan­tum suicide limited to 50⁄50 strictly quan­tised ex­per­i­ments, or does our con­scious­ness live on in a for­ever branch­ing mul­ti­verse? Sort of.

Nit­pick: All the Many-Wor­lds of QM still fol­low our par­tic­u­lar set of physics. For “all lawful uni­verses” to re­ally re­ally ex­ist, you prob­a­bly have to go to Teg­mark IV or some­thing like that....

I have had this ques­tion in my mind for ages. You say that these coun­ter­fac­tual uni­verses don’t ac­tu­ally ex­ist. But, ac­cord­ing to Many-Wor­lds, don’t all lawful Uni­verses ac­tu­ally re­ally re­ally ex­ist? I mean, isn’t there some am­pli­tude for Mr. Oswald to not have shot Kennedy, and then you get a blob where Kennedy didn’t get mur­dered?

I had the same re­ac­tion… Can this be the same Eliezer who au­thored the se­quences, and gave such strong sup­port for the re­al­ity of Many Wor­lds?

I was half-ex­pect­ing the other shoe to drop some­where in the ar­ti­cle… namely that if you are pre­pared to ac­cept that the Many Wor­lds re­ally ex­ist, it makes the Great Re­duc­tion­ist Pro­ject a whole lot eas­ier. State­ments about causal­ity re­duce to state­ments about causal graphs, which in turn re­duce to state­ments about coun­ter­fac­tu­als, which in turn re­duce to state­ments of ac­tual fact about differ­ent blobs of the (real) quan­tum state vec­tor. Similarly, state­ments about phys­i­cal “pos­si­bil­ity” and “prob­a­bil­ity” re­duce to com­pli­cated state­ments about other blobs and their sizes as mea­sured by the in­ner product on the state space.

Maybe Eliezer will be lead­ing that way later… If he isn’t I share your con­fu­sion.

It was men­tioned that if you were to make a con­tin­u­ous ana­log of the Bayesian Net­work, you’d end up with space and time, or some such. Maybe if you have a prob­a­bil­is­tic Bayesian Net­work you get QM out of it? As in, any given par­ent node has a num­ber of child nodes, each hap­pen­ing with a cer­tain prob­a­bil­ity… and then if you make the con­tin­u­ous ana­log of such you’ll get Quan­tum Me­chan­ics and Many-Wor­lds.

Mr. Yud­kowsky has thor­oughly con­vinced me of the re­al­ity of Many-Wor­lds (and my on­go­ing study of Q.M. has not yet even sug­gested oth­er­wise), so… so what, then?

I have read about Bohmian Me­chan­ics be­fore, and it failed to con­vince me. This ar­ti­cle keeps talk­ing about ‘non-de­ter­minism’ in­her­ent to Q.M. but I’m pretty sure Rel­a­tive State is quite very de­ter­minis­tic. Also, adding the speci­fi­ca­tion of a par­ti­cle’s po­si­tion to a de­scrip­tion doesn’t sound at all to me like the sim­plest ex­pla­na­tion pos­si­ble.

Maybe this is just me say­ing I pre­fer lo­cal­ity to coun­ter­fac­tual definite­ness, but… Rel­a­tive State still wins my favour.

This ar­ti­cle keeps talk­ing about ‘non-de­ter­minism’ in­her­ent to Q.M.

Read: tra­di­tional Q.M. Ar­gu­ments for BM and for MW are both largely still re­spond­ing to Copen­hagenism’s legacy of col­lapse the­o­rists. The next stage in the di­alec­tic should be for them to set aside the easy tar­get of col­lapse and start go­ing for each oth­ers’ throats di­rectly.

Also, adding the speci­fi­ca­tion of a par­ti­cle’s po­si­tion to a de­scrip­tion doesn’t sound at all to me like the sim­plest ex­pla­na­tion pos­si­ble.

Does adding Mag­i­cal Real­ity Fluid and an in­finity of in­visi­ble Wor­lds sound sim­ple, at the out­set? MW seems sim­ple and el­e­gant be­cause it’s fa­mil­iar; this tempts us to for­get just how much re­mains un­re­solved by the the­ory, and just how much it de­mands that we posit be­yond the ex­per­i­men­tal ob­ser­va­tions. Let’s be care­ful not to let un­fa­mil­iar­ity tempt us into treat­ing BM in an asym­met­ric way. Bell’s way of fram­ing BM is very in­tu­itive, I think:

“Is it not clear from the smal­l­ness of the scin­til­la­tion on the screen that we have to do with a par­ti­cle? And is it not clear, from the diffrac­tion and in­terfer­ence pat­terns, that the mo­tion of the par­ti­cle is di­rected by a wave? De Broglie showed in de­tail how the mo­tion of a par­ti­cle, pass­ing through just one of two holes in screen, could be in­fluenced by waves prop­a­gat­ing through both holes. And so in­fluenced that the par­ti­cle does not go where the waves can­cel out, but is at­tracted to where they co­op­er­ate. This idea seems to me so nat­u­ral and sim­ple, to re­solve the wave-par­ti­cle dilemma in such a clear and or­di­nary way, that it is a great mys­tery to me that it was so gen­er­ally ig­nored.”

Ac­tu­ally, I’m some­what grate­ful that it was ig­nored (ex­cept by de Broglie), since its in­tu­itive­ness might oth­er­wise have be­come such a firm or­tho­doxy that we wouldn’t have the rich de­bate be­tween MW the­o­rists of to­day. Given our hu­man ten­dency to fix on our first solu­tion, it is very use­ful that the weak­est the­ory (col­lapse) is the one peo­ple started with.

Maybe this is just me say­ing I pre­fer lo­cal­ity to coun­ter­fac­tual definiteness

“Pre­fer” as in it sounds more el­e­gant, or as in it seems more likely to be true? Un­tan­gling those two is the real prob­lem. We also need to keep in mind that the MW style of lo­cal­ity is a rather strange one. (Con­sider MW the­o­ries on which wor­lds ‘split;’ does this con­ser­va­tion-of-en­ergy-vi­o­lat­ing split prop­a­gate out­ward at the speed of light? What ba­sis does it oc­cur in?)

it failed to con­vince me. [...] Rel­a­tive State still wins my favour.

Bayesian rea­son­ing isn’t bi­va­lent; our goal is not sim­ply to pick the Best Op­tion, but to try to roughly es­ti­mate how un­cer­tain we should be. For in­stance, at this point should we as­sign a .4 prob­a­bil­ity to BM? .1? .005?

I’m not con­vinced of BM ei­ther, but I take it se­ri­ously as the main ri­val to the en­tire MW fam­ily of in­ter­pre­ta­tions. I take Col­lapse in­ter­pre­ta­tions far less se­ri­ously, not just be­cause of their strange du­al­ism but be­cause they have more promise of be­ing em­piri­cally ver­ified (hence their lack of ver­ifi­ca­tion counts against them), whereas BM and MW don’t seem to be dis­t­in­guish­able. (Also, BM-style views pre­date Everett by decades, so one can’t make the case that BM is an ad-hoc dis­tor­tion of MW.)

Does adding Mag­i­cal Real­ity Fluid and an in­finity of in­visi­ble Wor­lds sound sim­ple, at the out­set?

That’s not at all what Rel­a­tive State states… it just states that the Schröd­inger Equa­tion is all there is, full stop. The ex­is­tence of a num­ber of wor­lds is a con­se­quence, not an as­sump­tion.

Bell’s way of fram­ing BM is very in­tu­itive, I think:

“Is it not clear from the smal­l­ness of the scin­til­la­tion on the screen that we have to do with a par­ti­cle? And is it not clear, from the diffrac­tion and in­terfer­ence pat­terns, that the mo­tion of the par­ti­cle is di­rected by a wave? De Broglie showed in de­tail how the mo­tion of a par­ti­cle, pass­ing through just one of two holes in screen, could be in­fluenced by waves prop­a­gat­ing through both holes. And so in­fluenced that the par­ti­cle does not go where the waves can­cel out, but is at­tracted to where they co­op­er­ate. This idea seems to me so nat­u­ral and sim­ple, to re­solve the wave-par­ti­cle dilemma in such a clear and or­di­nary way, that it is a great mys­tery to me that it was so gen­er­ally ig­nored.”

Please forgive me if I misunderstand, but that sounds, to me, like just a way of making wavefunctions fit into the intuitive "particle" and "wave" molds. And it also looks like it ignores the fact that people are made of particles (wavefunctions), so whatever effects of any given particle (wavefunction) are detected by us would cause us to be superposed. I don't… really see a way out of being superposed at the macroscopic level.

“Pre­fer” as in it sounds more el­e­gant, or as in it seems more likely to be true? Un­tan­gling those two is the real prob­lem. We also need to keep in mind that the MW style of lo­cal­ity is a rather strange one. (Con­sider MW the­o­ries on which wor­lds ‘split;’ does this con­ser­va­tion-of-en­ergy-vi­o­lat­ing split prop­a­gate out­ward at the speed of light? What ba­sis does it oc­cur in?)

“Pre­fer” as in both sounds more el­e­gant and seems, to me, more likely to be true. Also, the con­ser­va­tion of en­ergy is never vi­o­lated, I don’t think, since we already had to mul­ti­ply the to­tal en­ergy by the nor­mal­ised am­pli­tude squared of the differ­ent states any­way.

Bayesian rea­son­ing isn’t bi­va­lent; our goal is not sim­ply to pick the Best Op­tion, but to try to roughly es­ti­mate how un­cer­tain we should be. For in­stance, at this point should we as­sign a .4 prob­a­bil­ity to BM? .1? .005?

I’m sorry, you’re right. What I meant by “failed to con­vince me” and “wins my favour” is that I still as­sign a > .5 prob­a­bil­ity to MW, or any in­ter­pre­ta­tion that doesn’t try to sneak away from macro­scopic su­per­po­si­tion, or tries to tell me physics is non-lo­cal. As I said, I have done my share of re­search on al­ter­na­tive in­ter­pre­ta­tions of Q.M. af­ter I started study­ing it (I’m not nearly done study­ing it, though) be­fore, and the one that sounded to me the sim­plest was MW.

I’m not con­vinced of BM ei­ther, but I take it se­ri­ously as the main ri­val to the en­tire MW fam­ily of in­ter­pre­ta­tions. I take Col­lapse in­ter­pre­ta­tions far less se­ri­ously, not just be­cause of their strange du­al­ism but be­cause they have more promise of be­ing em­piri­cally ver­ified (hence their lack of ver­ifi­ca­tion counts against them), whereas BM and MW don’t seem to be dis­t­in­guish­able. (Also, BM-style views pre­date Everett by decades, so one can’t make the case that BM is an ad-hoc dis­tor­tion of MW.)

I guess I don’t take it se­ri­ously be­cause, to my un­trained eyes, it looks like a the­ory that’s try­ing to es­cape quan­tum effects af­fect­ing the macro­scopic world by stick­ing macro­scopic in­tu­itions into the quan­tum world.

That’s not at all what Rel­a­tive State states… it just states that the Schröd­inger Equa­tion is all there is, full stop. The ex­is­tence of a num­ber of wor­lds is a con­se­quence, not an as­sump­tion.

Sure, but the the­ory with the sim­plest sound-bite ax­iom­a­ti­za­tion may not be the most par­si­mo­nious the­ory at the end of the day. And your con­fi­dence in that start­ing point will de­pend heav­ily on how con­fi­dent you are in the prospects for ex­tract­ing the Born prob­a­bil­ities from the Schröd­inger equa­tion on its lone­some. A the­ist will claim that his start­ing point is max­i­mally sim­ple rel­a­tive to its ex­plana­tory power—heck, one of his ax­ioms is that his start­ing point is max­i­mally sim­ple! that’s how sim­plic­ity works, right? -- but the difficulty of ac­tu­ally ex­tract­ing nor­mal­ity from the­ism with­out re­course to ‘deep mys­ter­ies’ un­der­mines the pro­ject in spite of its promis­ing con­ver­gences with the data.

that sounds, to me, just a way of mak­ing wave­func­tions fit into the in­tu­itive “par­ti­cle” and “wave” molds.

They aren’t in­tu­itive molds, in the sys­tem-1 sense; ‘par­ti­cle’ and ‘wave’ are the­o­ret­i­cal con­structs, and we un­der­stand them via (and im­port them from) struc­turally similar macro-phe­nom­ena. ‘Wave’ and ‘par­ti­cle’ are suffi­ciently sim­ple ideas, as macro-phe­nom­ena go, that they may re­cur at mul­ti­ple lev­els of or­ga­ni­za­tion. I don’t as­sume that they must do so; but it’s at least an idea worth as­sess­ing, if the re­sul­tant the­ory re­cap­tures the whole of nor­mal­ity with­out para­dox or mys­tery.

And it also looks like it ig­nores the fact that peo­ple are made of par­ti­cles (wave­func­tions), so what­ever effects of any given par­ti­cle (wave­func­tion) are de­tected by us would cause us to be su­per­posed.

The wave occurs at both positions (or with both spin components); the particle does not. Being made of particles, I have a determinate brain-state, not a superposed one; and I observe a determinate particle position, though the dynamics of that particle (and of my brain-state) are guided by the wave function. Many Worlds seems to predict that I will both see a spin-up measurement result and a spin-down measurement result, when I observe the superposed state. But in fact I seem to either see spin-up or spin-down, not both. So at this simple stage, Bohm correctly predicts our observation, and Many Worlds does not. That's why the challenge for Many Worlds is to make sense of the probabilistic element of QM. The Schrödinger dynamics leave no room for probability; they are, as you note, deterministic.

A mul­ti­verser may re­spond: ‘But we’ve come so far! We’ve made such progress! Surely we de­serve to be treated as the stan­dard view by this point. All that’s left is the small prob­lem of ex­plain­ing the emer­gence of the real.’ The Copen­hagenist and Bohmian hear this, and they think: But ac­count­ing for our ac­tual ob­ser­va­tions is the whole game. If you’ve suc­ceeded in ev­ery task ex­cept ac­tu­ally pre­dict­ing the Born prob­a­bil­ities, then what have you in fact gained, aside from a string-the­ory-style ed­ifice of el­e­gant ab­strac­tion? There’s the rub.

it looks like a the­ory that’s try­ing to es­cape quan­tum effects af­fect­ing the macro­scopic world by stick­ing macro­scopic in­tu­itions into the quan­tum world.

I un­der­stand your im­pulse, but it’s not as though we have a cache of ‘non-macro­scopic in­tu­itions’ to em­ploy in lieu of our macro­scopic ones. What we have are some el­e­gant for­mal­isms, which re­late to our ob­ser­va­tions in puz­zlingly reg­u­lar-but-non­lin­ear ways (the Born prob­a­bil­ities, the Pro­jec­tion Pos­tu­late). We then try to figure out what our el­e­gant for­mal­ism is say­ing; and if in the pro­cess of cash­ing out this for­mal­ism-we-don’t-un­der­stand, all we end up with are other, even more con­voluted for­mal­isms-we-don’t-un­der­stand, then we will have made no progress. This is not to say that the quan­tum world is obliged to match our in­tu­itions. It is only to say that for an in­ter­pre­ta­tion to even qual­ify as an in­ter­pre­ta­tion, it will have to give some con­tent to its for­mal­ism. As con­tent goes, ‘world-split­ting’ and ‘Mag­i­cal Real­ity Fluid’ is not much of an im­prove­ment, if im­prove­ment it is, over ‘par­ti­cle’ and ‘wave.’

As for my­self, I cur­rently as­sign about a .2 to Bohm, a .2 to all the pos­si­bly un­for­mu­lated hid­den vari­ables the­o­ries (if they ex­ist), and a .6 to mul­ti­verse-type the­o­ries, dom­i­nated by the many views that no one’s come up with yet. (The prob­a­bil­ity for col­lapse-type the­o­ries is too small to mat­ter here.) But I think most physi­cists who have a view on the is­sue as­sign a greater-than-.9 prob­a­bil­ity to their preferred var­i­ants of MW; and I haven’t seen ev­i­dence that they’ve grap­pled with the foun­da­tional ques­tions enough to war­rant that much con­fi­dence. A differ­ence of .3 is very large when the en­tire uni­verse is at stake; even if I would ul­ti­mately bet slightly against Bohm, con­sid­er­ing the level of dis­re­gard for his model, some of the most use­ful work will be in feign­ing mul­ti­verse hy­per­skep­ti­cism, and in par­tic­u­lar in challeng­ing MW to be­come more rigor­ous and ex­plicit in what it means with all this world-talk. Bohm may sound un­fash­ion­ably 19th-cen­tury at times, but at least it never sounds mys­ti­cal.

Sure, but the theory with the simplest sound-bite axiomatization may not be the most parsimonious theory at the end of the day. And your confidence in that starting point will depend heavily on how confident you are in the prospects for extracting the Born probabilities from the Schrödinger equation on its lonesome. A theist will claim that his starting point is maximally simple relative to its explanatory power—heck, one of his axioms is that his starting point is maximally simple! That’s how simplicity works, right?—but the difficulty of actually extracting normality from theism without recourse to ‘deep mysteries’ undermines the project in spite of its promising convergences with the data.

I meant not simplest as in simplest sound bite; I meant simplest in the way Mr. Yudkowsky has painfully explained elsewhere when he treated Occam’s Razor. One single equation is always a simpler proposition than two; and a whole intelligent being that sparked Existence itself and is not made of parts is so far off the map it’s not even worth considering as a preliminary hypothesis.

The wave occurs at both positions (or with both spin components); the particle does not. Being made of particles, I have a determinate brain-state, not a superposed one; and I observe a determinate particle position, though the dynamics of that particle (and of my brain-state) are guided by the wave function. Many Worlds seems to predict that I will both see a spin-up measurement result and a spin-down measurement result, when I observe the superposed state. But in fact I seem to either see spin-up or spin-down, not both. So at this simple stage, Bohm correctly predicts our observation, and Many Worlds does not. That’s why the challenge for Many Worlds is to make sense of the probabilistic element of QM. The Schrödinger dynamics leave no room for probability; they are, as you note, deterministic.

If you have any system that is in a given state A and that system interacts with another one that is in a superposition of states X and Y, it no longer makes sense to talk about the first and second system: the whole system is now in a superposition of states. Same thing with observing the measurement: what you actually observe is a computer telling you “spin-up” or “spin-down”. So that’s a gazillion atoms and molecules and particles and whatnot that are different depending simply on the state of the electron. Now suppose you somehow isolated that computer completely from the outside, so that not a single photon left it; then you could say that the computer is in a superposition. And as soon as you looked, so would you. The fact that you don’t actually see the computer displaying both “spin-up” and “spin-down”, or some combination, is just a consequence of the fact that, while the whole system, including you, your brain, the computer, the room you’re in, the air you’re breathing, etc., is in a superposition, the amplitude for the two states to interact is infinitesimal. For all intents and purposes, these two states have decohered. That’s not to say superposition is gone; it’s just to say that the amplitude for those two states to interact is nearly zero.
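The exponential suppression of interference between decohered branches can be made quantitative with a toy model (my own construction, not from the thread): if each of n environment qubits is nudged by a small angle depending on the system’s spin, the amplitude for the two branches to interfere is proportional to the product of the per-qubit overlaps, i.e. cos(θ)^n. The per-qubit angle θ = 0.3 is an arbitrary illustrative choice.

```python
import numpy as np

def branch_overlap(n_env, theta=0.3):
    """Overlap between the environment states of the 'spin-up branch'
    and the 'spin-down branch' after n_env environment qubits have
    each been rotated by a small angle theta in one branch only."""
    e_up = np.array([1.0, 0.0])
    e_down = np.array([np.cos(theta), np.sin(theta)])
    # Each qubit contributes a factor cos(theta) to the total overlap,
    # so the interference term between branches decays exponentially.
    return float(abs(e_up @ e_down)) ** n_env

for n in (1, 10, 100):
    print(n, branch_overlap(n))
```

With 100 environment qubits the overlap is already around one percent; with the ~10^23 degrees of freedom of a real computer and room, it is zero for all practical purposes, which is the sense in which the branches “can no longer interact.”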

A mul­ti­verser may re­spond: ‘But we’ve come so far! We’ve made such progress! Surely we de­serve to be treated as the stan­dard view by this point. All that’s left is the small prob­lem of ex­plain­ing the emer­gence of the real.’

Eh… I don’t know about that. I mean… well, I’ll come to that in a bit.

This is not to say that the quan­tum world is obliged to match our in­tu­itions. It is only to say that for an in­ter­pre­ta­tion to even qual­ify as an in­ter­pre­ta­tion, it will have to give some con­tent to its for­mal­ism. As con­tent goes, ‘world-split­ting’ and ‘Mag­i­cal Real­ity Fluid’ is not much of an im­prove­ment, if im­prove­ment it is, over ‘par­ti­cle’ and ‘wave.’

I’ll com­ment on it in a bit, too.

But I think most physi­cists who have a view on the is­sue as­sign a greater-than-.9 prob­a­bil­ity to their preferred var­i­ants of MW; and I haven’t seen ev­i­dence that they’ve grap­pled with the foun­da­tional ques­tions enough to war­rant that much con­fi­dence.

I think that is the same prob­lem I had with any other the­o­ries. The very idea of non-lo­cal­ity trig­gers alarm bells all over my brain. That > .9 prob­a­bil­ity to MW, I be­lieve, stems, at least par­tially, from an im­plicit < .01 prob­a­bil­ity to non-lo­cal­ity. So that re­ally leaves very lit­tle room for other in­ter­pre­ta­tions, and those, from what I’ve read, sound more bo­gus than Bohm.

[...] and in par­tic­u­lar in challeng­ing MW to be­come more rigor­ous and ex­plicit in what it means with all this world-talk. Bohm may sound un­fash­ion­ably 19th-cen­tury at times, but at least it never sounds mys­ti­cal.

I, per­son­ally, don’t think MW sounds all that “mys­ti­cal.” I guess that comes from hav­ing lived half my life in the 21st cen­tury, so even in fic­tion the no­tion of mul­ti­ple uni­verses has never been a scary, strange one. The ex­is­tence of a mul­ti­verse has always been a… per­sis­tent idea in my mind, and once I started read­ing up on Q.M. and study­ing the sub­ject I just gave form to that in­tu­ition. That be­ing said, I do agree with you that, at least from Wikipe­dia’s list of in­ter­pre­ta­tions, Bohm’s does look like the most solid al­ter­na­tive to MW.

And com­ing to my fi­nal point… the Born prob­a­bil­ities. I hon­estly, truly have not a clue where they come from. I am hop­ing that any fi­nal unified the­ory might be able to solve that lit­tle prob­lem (HA, lit­tle, right), but it wouldn’t be bad if some­one solved it from within Q.M. it­self. Some have tried, and I haven’t yet got­ten to the point where I be­lieve I am ready to read their at­tempts and truly grok what they mean so I can my­self judge my prob­a­bil­ity es­ti­mates.

I meant not sim­plest as in sim­plest sound bite, I meant in the way mr. Yud­kowsky has painfully ex­plained el­se­where when he treated Oc­cam’s Ra­zor. One sin­gle equa­tion is always a sim­pler propo­si­tion than two; and a whole in­tel­li­gent be­ing that sparked Ex­is­tence it­self and is not made of parts is so far off the map it’s not even worth con­sid­er­ing as a pre­limi­nary hy­poth­e­sis.

Yes, I grok. My point was that some the­ists don’t just think that God is sim­ple part­wise; they think that in some un­known (per­haps in­ef­fable) way he’s max­i­mally con­cep­tu­ally sim­ple, i.e., if we were smarter we could for­mu­late God in some­thing equa­tion-like and sud­denly un­der­stand why ev­ery­thing about him re­ally flows forth el­e­gantly from a profoundly sim­ple and uni­tary prop­erty. (And if ev­ery­thing else flows forth in­evitably from God, the the­ory as a whole is no more com­plex than its God-term. Of course, free-will-in­vok­ing var­i­ants will be ex­plana­to­rily in­el­e­gant by de­sign; sud­den in­ex­pli­ca­ble ‘choices’ will func­tion for liber­tar­i­ans like col­lapse func­tions for Copen­hagenists.)

Ob­vi­ously, this promise of be­ing able to for­mu­late God in con­cep­tu­ally (and not just mere­olog­i­cally) sim­ple terms is not cred­ible. But this was the point of my (ad­mit­tedly un­kind) anal­ogy; we should be wary of the­o­ries that promise an el­e­gant, unim­peach­ably Sim­ple re­duc­tion but have difficulty con­nect­ing that re­duc­tion to nor­mal­ity even in a sweep­ing, generic fash­ion. MW is ob­vi­ously much bet­ter in this re­gard than the­ism, but one of the prob­lems with the­ism (it promises a sim­ple re­duc­tion, but leaves the ‘sim­ple’ un­demon­strated) is in­ter­est­ingly analo­gous to the prob­lem with MW (it promises a sim­ple re­duc­tion, but leaves the ‘re­duc­tion’ un­demon­strated). I don’t take this to be a dis­tinct ar­gu­ment against MW; I just wanted to call it to at­ten­tion.

I think that is the same prob­lem I had with any other the­o­ries. The very idea of non-lo­cal­ity trig­gers alarm bells all over my brain. That > .9 prob­a­bil­ity to MW, I be­lieve, stems, at least par­tially, from an im­plicit < .01 prob­a­bil­ity to non-lo­cal­ity.

Fair enough. This per­haps is the fun­da­men­tal ques­tion: The naive in­ter­pre­ta­tion of data from EPR-style ex­per­i­ments is quite sim­ply that non­lo­cal cau­sa­tion (albeit not of the sort that can be used to trans­mit in­for­ma­tion) is in effect be­tween dis­tant en­tan­gled states. If your com­mit­ment to lo­cal­ity is strong enough, then you can re­cover lo­cal­ity by posit­ing that you’ve im­per­cep­ti­bly fallen into an­other world in in­ter­act­ing with one of the par­ti­cles, drag­ging ev­ery­thing around you into a some­how-dis­tinct com­po­nent of a larger, quasi-di­alethe­ist (re­ally, com­plex) re­al­ity. I don’t be­grudge those who pur­sue this path; I only en­courage care­ful scrutiny of ex­actly which pri­ors we’re ap­peal­ing to in tak­ing that first step away from the naive, su­perfi­cial in­ter­pre­ta­tion of the ex­per­i­men­tal re­sult that caused this as­pect of the prob­lem.

I, per­son­ally, don’t think MW sounds all that “mys­ti­cal.” I guess that comes from hav­ing lived half my life in the 21st cen­tury, so even in fic­tion the no­tion of mul­ti­ple uni­verses has never been a scary, strange one.

I don’t find the idea of clearly dis­tinct uni­verses mys­ti­cal or strange or scary. I do find it strange and very-nearly-in­co­her­ent to think of wor­lds ‘bleed­ing to­gether’ at the edges; and I very much won­der what it would be like to fully in­habit that in­ter­sec­tion be­tween wor­lds.

the Born prob­a­bil­ities. I hon­estly, truly have not a clue where they come from.

Note that on BM, the Born probabilities emerge from stochastic initial particle distributions; probabilities are epistemic, not metaphysical (as they are in collapse). One can raise the further question ‘Why would a random distribution of particles yield the Born statistics as opposed to some other option?’ Dürr, Goldstein, and Zanghì account for this distribution in Quantum Equilibrium and the Origin of Absolute Uncertainty. This specific point is a strong reason to take Bohmian Mechanics seriously.
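The quantum-equilibrium idea can be sketched numerically (my own toy illustration, not the Dürr–Goldstein–Zanghì derivation): if the initial particle positions are distributed as |ψ|², then empirical frequencies automatically reproduce the Born statistics for position measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground state of a particle in a box [0, 1]: psi(x) = sqrt(2) * sin(pi * x)
x = np.linspace(0.0, 1.0, 1001)
born_density = 2.0 * np.sin(np.pi * x) ** 2  # |psi(x)|^2

# Quantum equilibrium hypothesis: initial positions sampled from |psi|^2.
samples = rng.choice(x, size=100_000, p=born_density / born_density.sum())

# Fraction of particles found in the middle third of the box,
# versus the Born prediction (the integral of |psi|^2 over [1/3, 2/3]).
empirical = np.mean((samples > 1/3) & (samples < 2/3))
theoretical = 1/3 + np.sqrt(3) / (2 * np.pi)
print(empirical, theoretical)
```

The empirical fraction lands within sampling error of the Born value (about 0.609). The hard part, which the paper addresses, is arguing that the |ψ|² initial distribution is itself natural rather than another postulate in disguise.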

BM re­quires some re­ally un­pleas­ant ini­tial com­mit­ments, but there don’t seem to be any spe­cial in­ter­pre­tive prob­lems, para­doxes, or un­solved prob­lems in BM, aside from the ‘or­di­nary’ leg­work re­quired in any gen­eral micro­phys­i­cal the­ory (e.g., we need a Bohmian QFT). BM has solved the Mea­sure­ment Prob­lem; MW merely has some re­ally sug­ges­tive hints that it might some­day offer a more el­e­gant solu­tion of its own.

The sole difficulty BM faces, in con­trast, is that it’s just kind of… ugly. Overtly, avowedly, un­abashedly ugly. (That’s re­ally what I re­spect most about the the­ory. It doesn’t hide its flaws; it defines it­self in terms of them.) But un­til these same prob­lems have been solved in at least one of BM’s com­peti­tors, we have no way of know­ing that some analo­gous ugli­ness (like ‘mag­i­cal re­al­ity fluid’) won’t be de­manded in the end in any em­piri­cally ad­e­quate in­ter­pre­ta­tion! Scary thought, eh? I also take se­ri­ously the ped­a­gog­i­cal util­ity of BM (in spite of its in­el­e­gance in prac­tice), as ex­pressed in the above pa­per: “Per­haps this pa­per should be read in the fol­low­ing spirit: In or­der to grasp the essence of Quan­tum The­ory, one must first com­pletely un­der­stand at least one quan­tum the­ory.” Even if BM is false, us­ing it as a naively con­crete read­ing of the QM for­mal­ism may help us bet­ter grasp the gen­eral struc­tural fea­tures that any em­piri­cally ad­e­quate QM in­ter­pre­ta­tion will need to pre­serve.

MW is ob­vi­ously much bet­ter in this re­gard than the­ism, but one of the prob­lems with the­ism (it promises a sim­ple re­duc­tion, but leaves the ‘sim­ple’ un­demon­strated) is in­ter­est­ingly analo­gous to the prob­lem with MW (it promises a sim­ple re­duc­tion, but leaves the ‘re­duc­tion’ un­demon­strated). I don’t take this to be a dis­tinct ar­gu­ment against MW; I just wanted to call it to at­ten­tion.

I guess we’ll have to wait until we have interstellar travel to observe completely superposed civilisations so that we can actually see MW? That was a joke, by the way.

If your com­mit­ment to lo­cal­ity is strong enough, then you can re­cover lo­cal­ity by posit­ing that you’ve im­per­cep­ti­bly fallen into an­other world in in­ter­act­ing with one of the par­ti­cles, drag­ging ev­ery­thing around you into a some­how-dis­tinct com­po­nent of a larger, quasi-di­alethe­ist (re­ally, com­plex) re­al­ity. I don’t be­grudge those who pur­sue this path; I only en­courage care­ful scrutiny of ex­actly which pri­ors we’re ap­peal­ing to in tak­ing that first step away from the naive, su­perfi­cial in­ter­pre­ta­tion of the ex­per­i­men­tal re­sult that caused this as­pect of the prob­lem.

It’s not re­ally “fallen into an­other world” as much as “be­ing in a su­per­posed state.” If you as­sume that su­per­po­si­tion is a real effect of wave­func­tions (par­ti­cles), then you have to as­sume that you also be­long in states. The only way of es­cap­ing that is not be­liev­ing su­per­po­si­tion is an ac­tual, real effect, which to me looks like ex­actly what Bohm says.
Now I’m not say­ing that I give a > .9 prob­a­bil­ity to MW. It’s > .5, but I do not trust my own abil­ity to gauge my prob­a­bil­ity es­ti­mates the way you did.

I don’t find the idea of clearly dis­tinct uni­verses mys­ti­cal or strange or scary. I do find it strange and very-nearly-in­co­her­ent to think of wor­lds ‘bleed­ing to­gether’ at the edges; and I very much won­der what it would be like to fully in­habit that in­ter­sec­tion be­tween wor­lds.

Point. I think Mr. Yudkowsky mentioned something about a non-existence of worlds at that intersection? As in, the leakage from the “larger” worlds is so big that the intersection ceases to exist, and then you have clearly distinct universes. Or at least that’s what I understood. I don’t think I like or even agree with the idea; it, too, sounds to me like trying to fit physics into intuition. But anyway, I agree with you that one of the main points in my head against MW is that intersection. That, and what I mentioned above, of completely impossible situations (like zombie Kennedy) never having happened in recorded history.

BM re­quires some re­ally un­pleas­ant ini­tial com­mit­ments, but there don’t seem to be any spe­cial in­ter­pre­tive prob­lems, para­doxes, or un­solved prob­lems in BM, aside from the ‘or­di­nary’ leg­work re­quired in any gen­eral micro­phys­i­cal the­ory (e.g., we need a Bohmian QFT). BM has solved the Mea­sure­ment Prob­lem; MW merely has some re­ally sug­ges­tive hints that it might some­day offer a more el­e­gant solu­tion of its own.

Point. Which is why I agree with you that BM is the only other serious candidate. [whine]But those initial commitments are really unpleasant.[/whine]

The sole difficulty BM faces, in con­trast, is that it’s just kind of… ugly. Overtly, avowedly, un­abashedly ugly. (That’s re­ally what I re­spect most about the the­ory. It doesn’t hide its flaws; it defines it­self in terms of them.) But un­til these same prob­lems have been solved in at least one of BM’s com­peti­tors, we have no way of know­ing that some analo­gous ugli­ness (like ‘mag­i­cal re­al­ity fluid’) won’t be de­manded in the end in any em­piri­cally ad­e­quate in­ter­pre­ta­tion! Scary thought, eh?

Scary in­deed. Mag­i­cal re­al­ity fluid ac­tu­ally ter­rifies me, and if it turns out that MW re­quires it… well, I think I pre­fer non-lo­cal­ity to that.

They aren’t in­tu­itive molds, in the sys­tem-1 sense; ‘par­ti­cle’ and ‘wave’ are the­o­ret­i­cal constructs

I think that is pretty much the wrong way round. The only way you can model a dimensionless particle in QM is as a Dirac delta function, but those are mathematically intractable (with a parallel argument applying to pure waves), so in a sense there are no particles or waves in QM, and whatever w/p dualism is, it is not a dualism of sharply defined opposites, as would be implied by Bohr’s yin-yang symbol!

But in fact I seem to either see spin-up or spin-down, not both.

In fact, you see macroscopic pointer readings. That is an important point, since Many Worlders think that the superposition disappears with macroscopic decoherence.

The only way you can model a di­men­sion­less par­ti­cle in QM is as a [dirac] delta function

I wasn’t speci­fi­cally as­sum­ing di­men­sion­less par­ti­cles. Clas­si­cal atoms could be mod­eled par­tic­u­lately with­out be­ing points, pro­vided each can be picked out by a fixed po­si­tion and a mo­men­tum.

In fact, you see macroscopic pointer readings. That is an important point, since Many Worlders think that the superposition disappears with macroscopic [decoherence].

Yes, this dis­tinc­tion is very im­por­tant for BM too. For ex­am­ple, BM ac­tu­ally fails the em­piri­cal ad­e­quacy test if you treat ‘spin-up’ and ‘spin-down’ as mea­surable prop­er­ties of par­ti­cles.

Ac­tu­ally, I’m some­what grate­ful that it was ig­nored (ex­cept by de Broglie), since its in­tu­itive­ness might oth­er­wise have be­come such a firm or­tho­doxy that we wouldn’t have the rich de­bate be­tween MW the­o­rists of to­day.

For instance, David Deutsch’s contention that BM is just MW with unnecessary additional complexity.

Also, BM-style views pre­date Everett by decades, so one can’t make the case that BM is an ad-hoc dis­tor­tion of MW.

If one wishes. But MW and BM give con­trary an­swers to al­most ev­ery ques­tion, in spite of their mu­tual em­piri­cal ad­e­quacy. They’re suffi­ciently dis­tinct as to al­most qual­ify as alien physics—in­com­men­su­rate-yet-co­her­ent in the way you might ex­pect the the­o­ries of two in­de­pen­dent civ­i­liza­tions to be. That in it­self makes the act of try­ing to eval­u­ate and com­pare the two kinds of model Bayesi­anly ex­tremely use­ful and in­for­ma­tive. It re­ally gets to the heart of mak­ing some of our core pri­ors ex­plicit.

Bub suggests that a number of traditional interpretations of quantum theory can be characterized as modal interpretations if the existence of a preferred observable R is allowed. Notable among them are the Dirac–von Neumann interpretation, (what Bub takes to be) Bohr’s interpretation, and Bohm’s theory. In the last case, Bub argues that Bohm’s theory can be recovered as a modal interpretation in which R is the position observable.

There is an in­ter­est­ing fur­ther ques­tion about whether the modal con­cept of “pos­si­bil­ity” can be fur­ther re­duced… I guess Eliezer would ar­gue that it should be.

I just read Mr. Yud­kowsky’s ar­ti­cles on Boltz­mann Brains and the An­thropic trilemma… and I had thought of those ques­tions a while ago. While they’re not di­rectly re­lated to this com­ment, I guess I should com­ment about them here, too.

I have no problem thinking of myself as a Boltzmann Brain. Since most (if not all) such Brains will die an instant after existing, I guess my existence could be accurately described as a string of Boltzmann Brains in different regions of spacetime, each containing a small (not sure how small) slice of my existence. Perhaps they all exist at the same time. And the Anthropic Principle would explain the illusion of continuity, somewhat. My main thought on the Boltzmann Brain idea is that any hypothesis that has no way to be tested even in principle is equivalent to the null hypothesis. I guess what I mean is, if I found out right now, with P ~ 1, that my existence is a string of Boltzmann Brains, that would not affect my predictions. I’m not sure I should be thinking this… because this whole matter confuses the hell out of me, but that’s my current mental state.

As for the Anthropic Trilemma… well, I guess it pretty much means Mr. Yudkowsky has the same doubts as I do. Very, very confusing business indeed. Sometimes I think I should just quit thinking and become a stripper. That was a joke, by the way.

Take the ap­ples and grind them down to the finest pow­der and sieve them through the finest sieve and then show me one atom of six­ness, one molecule of mul­ti­pli­ca­tion.

Disc­world refer­ence FTW. I would sus­pect that Pratch­ett’s Death, be­ing the sec­u­lar hu­man­ist and life en­thu­si­ast that he is, would strongly ap­prove of our efforts here to even­tu­ally ren­der him ir­rele­vant.

It may not be possible to draw a sharp line between the things that exist and the things that do not exist. Surely there are problematic referents (“the smallest triple of numbers in lexicographic order such that a^3+b^3=c^3”, “the historical Jesus”, “the smallest pair of numbers in lexicographic order such that a^3+24=c^2”, “Shakespeare’s firstborn child”) that need considerable working with before ascertaining that they exist or do not exist. Given that difficulty, it seems like we work with existence explicitly, as a theory; it’s not “baked in” to human reasoning.
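At least two of those referents can be settled mechanically, which illustrates the “considerable working with” the comment mentions. Reading “numbers” as positive integers (an assumption on my part), the pair for a³ + 24 = c² exists and is found by a finite search, while Fermat’s Last Theorem rules out any triple for a³ + b³ = c³:

```python
from math import isqrt

def smallest_pair(bound=10_000):
    """First (a, c) in lexicographic order with a**3 + 24 == c**2,
    searching positive integers a up to `bound`."""
    for a in range(1, bound + 1):
        s = a ** 3 + 24
        c = isqrt(s)  # integer square root; exact check below
        if c * c == s:
            return (a, c)
    return None

print(smallest_pair())  # -> (1, 5), since 1**3 + 24 == 25 == 5**2
```

One referent is settled by a theorem, the other by a short computation; neither answer was obvious from the phrase alone, which is the point about existence being theory-laden.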

Guy Steele wrote a talk called “Growing a Language”, where one of his points is that building hooks (such as functions) into the language definition to allow the programmer to grow the language is more important than building something that is often useful, say, complex numbers or a rich collection of string manipulation primitives. Maybe talking about the structure of “theories of X” would be valuable. Perhaps all theories have examples (including counterexamples as a specific kind of example) and rules (including definitions as a specific kind of rule); that’s the kind of thing that I’m suggesting might be more like a hook.

I am not convinced—by this article, at least—that there could only be two kinds of stuff. It sounds like the answer to the question, “why two and not one or possibly three?” is, “because I said so”, and that’s not very convincing.

I am also not en­tirely sure what the Great Re­duc­tion­ist Pro­ject is, or why it’s im­por­tant.

Note that I’m not ar­gu­ing against re­duc­tion­ism, but solely against this post.

Could the Born probabilities be basic—could there just be a basic law of physics which just says directly that to find out how likely you are to be in any quantum world, the integral over squared modulus gives you the answer? And the same law could just as easily have said that you’re likely to find yourself in a world that goes over the integral of modulus to the power 1.99999?

But then we would have ‘mixed references’ that mixed together three kinds of stuff—the Schrödinger Equation, a deterministic causal equation relating complex amplitudes inside a configuration space; logical validities and models; and a law which assigned fundamental-degree-of-realness a.k.a. magical-reality-fluid. Meaningful statements would talk about some mixture of physical laws over particle fields in our own universe, logical validities, and degree-of-realness.

I guess I understand better now where your dislike of the “shut up and calculate” non-interpretation of QM is coming from. You refuse to acknowledge that the Born probabilities could be a manifestation of some deeper physical law we do not yet know, and that the Schrödinger equation could be another manifestation of the same law, thus removing the need for the “third thing”. The standard reaction to what I just said is “but we don’t need anything else, just the Schrödinger equation”, followed by making extra assumptions equivalent to the Born rule, only more complicated.

In the first para­graph you quoted, EY ar­bi­trar­ily and pointlessly jux­ta­poses two differ­ent ques­tions. I say “pointlessly” char­i­ta­bly, be­cause if there is a point, it’s a bad one, to (guilt-by-)as­so­ci­ate an af­fir­ma­tive an­swer to the first, with an af­fir­ma­tive an­swer to the sec­ond.

Could the Born prob­a­bil­ities be ba­sic? “Could” would seem best in­ter­preted here as “for­mu­la­ble con­sis­tently with the two-fac­tor Great Re­duc­tion­ist ap­proach.” “Ba­sic” I’ll take as rel­a­tive to a model: if a law is de­rived in the model, it’s not ba­sic. Now that we know what the ques­tion is, the an­swer is: sure, why not? Phys­i­cal laws men­tion “elec­tric charge”, “time”, “dis­tance”; adding “prob­a­bil­ity” doesn’t seem to break any­thing, as long as the re­sult­ing the­ory is testable. That ba­si­cally prob­a­bil­is­tic the­ory might not be the most el­e­gant, but that’s a differ­ent ar­gu­ment. And there’s no need to top prob­a­bil­ities with fun­da­men­tal-de­gree-of-re­al­ness sauce.
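The special status of the exponent 2 in the quoted passage (versus 1.99999) can be illustrated numerically; this is my own sketch, not an argument from the thread, and it does not derive the Born rule. It shows one standard observation: only the squared modulus gives a total “probability” conserved under unitary evolution. The drift at exponent 1.99999 would be real but minuscule, so the example uses p = 1.5 to make it visible.

```python
import numpy as np

def total_weight(state, p):
    # Candidate 'probability' rule: sum of |amplitude|**p over the basis.
    return float(np.sum(np.abs(state) ** p))

psi = np.array([1.0, 0.0])                       # state on one basis vector
hadamard = np.array([[1.0, 1.0],
                     [1.0, -1.0]]) / np.sqrt(2)  # a simple unitary rotation

for p in (2.0, 1.5):
    print(p, total_weight(psi, p), total_weight(hadamard @ psi, p))
```

For p = 2 the total weight stays exactly 1 under the rotation; for p = 1.5 it jumps to 2^0.25 ≈ 1.189. So a basic law with a non-2 exponent would have to say something extra about when and in which basis the rule applies, which bears on how “basic” it could really be.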

I’m not say­ing or im­ply­ing that “any­thing that helps one make good pre­dic­tions, goes”. I re­ally don’t think in­stru­men­tal­ism is rele­vant here; if we take it off the table as an op­tion, there still doesn’t seem to be any rea­son to dis­pre­fer a the­ory that posits “ob­jec­tive prob­a­bil­ity” to one that posits “elec­tric charge”, aside from the over­all el­e­gance and ex­plana­tory power of the two the­o­ries. Which are rea­sons to in­cline to be­lieve that a the­ory is true, I take it, not just to see it as use­ful.

Great post as usual, Eliezer! I have to ad­mit that I never thought of log­i­cal and causal refer­ences be­ing mixed be­fore, but truly that is of­ten ex­actly how we use them.

I have one ques­tion, though: I read through the quan­tum physics se­quence, and I just don’t un­der­stand—why are the Born prob­a­bil­ities such a prob­lem? Aren’t there just blobs of am­pli­tude de­co­her­ing? Is the prob­lem that all the de­co­her­ence is already pre­dicted to hap­pen, with­out im­ply­ing the Born rule? If some­one could clar­ify this for me, I’d greatly ap­pre­ci­ate it.

I am not sure I am correct, but if I’m not mistaken, the problem with the Born rule is that no one so far has successfully (in the eyes of their peer physicists) proven that it must hold. As in, it’s additional. If you go by the standard Copenhagen interpretation, since Collapse is already an arbitrary additional rule, it already sort of contains the Born probabilities: they’re just the additional rules that additionally condition how Collapse happens. But any other theories that remove objective, additional Collapse from the picture have this big problem: why, oh, WHY do we get the Born probabilities?

Fur­ther­more, we have an even more in­ter­est­ing ques­tion: what do they even mean?! Sup­pose you (tem­porar­ily) ac­cept the Born prob­a­bil­ities. What are they prob­a­bil­ities of? Mean­ing: if there is a 75% chance that you will ob­serve a pho­ton po­larised in a given di­rec­tion, what does that mean, in the grand scheme? Are you di­vided into 100 copies of you, and 75 of them ob­serve such po­lari­sa­tion, while 25 of them don’t?

I am somewhat confused about the nature of logical axioms. They are not reducible to physical laws, and physical laws are not reducible to logic. So then, in what sense are they (the axioms) real? I don’t think you are saying that they are “out there” in some Platonic sense, but it also seems like you are taking a realist or quasi-empirical approach to math/logic.

Phys­i­cal laws are no more real than log­i­cal ax­ioms. Both are hu­man con­structs, started as mod­els used to ex­plain ob­ser­va­tions and grown to ac­com­mo­date other in­ter­ests. Just like the phys­i­cal law F=ma is a model to ex­plain why a heav­ier ball kicked with the same force does not speed up as much, the log­i­cal ax­iom of tran­si­tivity “ex­plains” why if you can trade sheep X for sheep Y and sheep Y for sheep Z, it is OK to trade sheep X for sheep Z in many cir­cum­stances.

Logical Axioms are the rules that decide what can and can’t happen. Then, our physical world is one application of these to some starting physical position (and that may be logically defined too; read this post, or Good and Real).

Logic is useful when we have uncertainty. If we are unsure about a certain variable, we can extrapolate to how the future will be given the different possibilities—the different variables that are logically consistent within a causal universe that fits with everything else we know. Of course, if we had no causal knowledge whatsoever, then we’d not have anything with which to apply logic (kinda like this post, with causal reference being emotions, and logic being logic).

So, I’m saying that logic can define how everything that could be would work, which we deduce from our universe’s laws. If we have uncertainty, then logic defines the possibilities. If we pretend to have only the knowledge of one law, like ‘1 + 1 = 2’, then we can find out more using logic. And this is the study of mathematics.

I think Logical Axioms are the rules that decide what can and can’t happen. Then, our physical world is one application of these to some starting position (and that may be logically defined too; read this post, or Good and Real).

No, logical axioms are much too general for that. You need physical laws to project the future state of the world, and they are much more specific than logical axioms.

Could you provide an example please? I must apologise, I’m not competent with fundamental laws of physics, but why can’t the most basic laws (the ‘wave function’ is apparently one of them) be specified logically? Wouldn’t that just be a mathematical description of the first state of a universe? Then that whole universe, specified by the simplest law(s), would be one universe, and those/us within that world would only be able to be affected by the things causally connected to us.

Okay. I tried to re­spond here, but I’m not qual­ified to do so. I’ll just state what I’m think­ing, and then, if you could point out what I might be con­fused about, I’ll leave it there and might go read some books.

I think this is a con­fu­sion of defi­ni­tions. If ev­ery uni­verse is de­scribed in logic, then the phys­i­cal laws are a sub­set of those. So, logic de­scribes ev­ery­thing that is con­sis­tently pos­si­ble and then whichever uni­verse we’re in is a sub­set. Logic de­scribes how our uni­verse works. So the Great Re­duc­tion­ist Pro­ject is defin­ing which branch of log­i­cal de­scrip­tion space we are, and show­ing on the way that no part of the uni­verse is not de­scrib­able within logic.

No, if you buy a book on logic, it doesn’t describe the universe. To get a description of our universe in mathematical/​logical terms, you have to add in empirical information. There is a convenient shorthand for that: physics. Physics describes how our universe works.

So the Great Reductionist Project is … showing on the way that no part of the universe is not describable within logic.

Huh? How can it show that? Whether there is a part of our universe that is not describable by logic is an empirical claim. Science could encounter something irreducible at any point.

Uhm, not re­ally. I’m not en­tirely sure what you mean by “math re­lies on things do­ing math”. Math isn’t about the think­ing ap­para­tus do­ing math. It’s a way of sys­tem­at­i­cally re­duc­ing the com­plex­ity of your men­tal mod­els—it re­places adding peb­bles and adding ap­ples with just adding.

If you imag­ine a uni­verse with 4 par­ti­cles in it, then 2+3 is still 5.

When you say math, are you talk­ing about the way ap­ples and stones in­ter­act and the states of the uni­verse af­ter­wards when the uni­verse performs “op­er­a­tions” on them? If so, then math is agent-in­de­pen­dent, as the world-state of 2+3 ap­ples will be five ap­ples re­gard­less of the ex­is­tence of some agent perform­ing “2+3=5″ in that uni­verse.

If you’re talking about the existence of the “rules of mathematics” (our study of things and of counting, along with the knowledge and models that said abstract study implies), then it does rely on agents having 2+3=5 models. Otherwise there’s just a world-state with two blobs of particles somewhere and three blobs of particles elsewhere, then a world-state that brings the blobs together, and a final world-state, none of which needs “2+3=5” to exist. It requires an agent looking at the apples and performing “mathematics” on their model of those blobs of particles in order to establish the model that two and three apples will be five apples.

In other words, what-we-know-as “math­e­mat­ics” would not have been in­vented if there were no agent us­ing a model to rep­re­sent re­al­ity, as math­e­mat­ics are ab­stract meth­ods of de­scrip­tion. How­ever, the uni­verse would con­tinue to be­have in the same man­ner whether we in­vented math­e­mat­ics or not, and as such the be­hav­iors im­plied by math­e­mat­ics when we say “2+3 ap­ples = 5 ap­ples” are in­de­pen­dent of agents.

So when an agent or com­put­ing de­vice performs an op­er­a­tion on real num­bers, say di­vi­sion of 1200 by 7, that re­sult is real, even though the in­stance of this di­vi­sion re­quires the agent to do it? The an­swer IS the only an­swer, but with­out an agent, there would not be a ques­tion in the first place?

That re­sult is log­i­cally valid and con­sis­tent, but does not have any new phys­i­cal real-ness that it didn’t already have—that is, its cor­re­la­tion and sys­tem­atic con­sis­tency with the rules of how the uni­verse works.

Thanks for the paper! I have started to read this and am admittedly overwhelmed. I think I understand the concept, but without the ability to understand the math, I feel limited in my scope to comprehend this. Would you be able to give me a brief summary of why we should accept MUH and why it is controversial?

We should be­lieve MUH be­cause it’s math­e­mat­i­cally im­pos­si­ble to con­sis­tently be­lieve in any­thing that’s not maths, be­cause be­liefs are made of maths and can’t re­fer to things that are not maths.

It’s controversial because humans are crazy, and can’t ignore things genetically hard-coded into their subconscious no matter how little sense it makes.

EDIT: Appears I was stupid and interpreted your question literally instead of trying to make an actual persuasive explanation.

Can’t really help you with that, I absolutely suck at explaining things, especially things I see as self-evident. I literally cannot imagine what its being any other way would even mean, so I can’t explain how to get from there to here.

be­liefs are made of maths and can’t re­fer to things that are not maths.

...to me that sounds like say­ing “words are made of let­ters and can’t re­fer to things that are not let­ters, there­fore e.g. trees and clouds must be made of let­ters.” It sounds like a map-ter­ri­tory con­fu­sion of in­sane de­gree.

The Math­e­mat­i­cal Uni­verse Hy­poth­e­sis may be true, but this ar­gu­ment doesn’t re­ally work for me.

This is just the same sort of prob­lem if you say that causal mod­els are mean­ingful and true rel­a­tive to a mix­ture of three kinds of stuff, ac­tual wor­lds, log­i­cal val­idi­ties, and coun­ter­fac­tu­als, and log­i­cal val­idi­ties.

You have a typo there, I think. “Logical validities” appears twice. If it’s not a typo, the sentence is very unclear.

Tangential: I keep not understanding counterfactuals intuitively, not because of the usual reason, but simply because if I take my best model of the past and rerun it towards the future, I do not arrive at the present due to stochastic and chaos elements.

Aka, try­ing to do the stan­dard math: I throw a 100 sided dice, it comes out 73, “If 2+2 were equal to 4, the dice would with 99% cer­tainty have come out 73”.

If 2+2 were equal to 4, the dice would with 99% cer­tainty [not] have come out 73

The state­ment is true, but be­cause mak­ing a state­ment in a con­ver­sa­tion is nor­mally taken to have a point, no­body would ever say such a thing. If it rings false to your ears, that’s your so­cial in­stincts rightly warn­ing you that mak­ing such a state­ment would be likely to de­ceive some­one.

Com­pare: my su­per-smart friend is study­ing for a test. I know he’ll ace it no mat­ter what. I wouldn’t tell him “if you go to bed now and get some sleep you’ll ace it to­mor­row”, and I wouldn’t tell him “if you study all night you’ll ace it”, de­spite both of those be­ing true. In ei­ther case he would think the first part of my state­ment was rele­vant.

Then how can anyone meaningfully talk about “what would have happened if X had happened instead of Y, Z years ago”, when there’d be billions of changes due to randomness, vastly larger than the kind of things humans tend to respond to that type of question with, completely drowning them out?

Com­pare: my su­per-smart friend is study­ing for a test. I know he’ll ace it no mat­ter what. I wouldn’t tell him “if you go to bed now and get some sleep you’ll ace it to­mor­row”, and I wouldn’t tell him “if you study all night you’ll ace it”, de­spite both of those be­ing true.

But this is be­cause the pur­pose of say­ing the above isn’t merely to in­form your friend of a true state­ment — it’s to con­vince him to get a good night’s sleep, in or­der to cause him to be well and happy.