Speaking for myself (re: how the LW2.0 team communicates)

Posts made by members of the LessWrong 2.0 team are typically made from the perspective of the individual—even when they are writing in their capacity as LessWrong team members. My (Ruby’s) model of the team’s reason for this is that even if there are collective decisions, there are no collective models. Not real models.

When the team agrees to do something, it is only because enough of the individual team members individually have models which indicate it is the right thing to do. Our models might be roughly the same at a high level, such that you can describe a “common denominator” model, but this isn’t an actual model held by an actual person. I think such “common denominator” group models are undesirable for at least the following reasons:

Pressure to form consensus reduces the diversity of models, sometimes going from multiple models per person to only a single model for the group. This can then result in overconfidence in the surviving model.

There might be no group model. The group might have agreed on a decision, but they never reached consensus on the reasons for it.

It is costly to describe group models. Either someone has to draft the model, get feedback, make revisions, and repeat until eventually it is “good enough”, or someone describes a model putatively held by the group, but which is not actually representative of the group’s thinking.

In fact, no individual might endorse the group model as being their own.

The person describing the group model doesn’t necessarily understand the things they include that came from others.

In averaging multiple models, detail is lost and you no longer have a model which can usefully generate predictions.

No individual owns the model, making it hard for any one person to elucidate, defend, or be held accountable for it.

Reluctance to speak on behalf of others means that very little gets said.

Crucially, group models which get shared externally are very often not the models which were used to make decisions. If you want to understand a decision, you want the actual model which generated it.

Given the goal of sharing our actual true thinking with the outside world (rather than nicely curated PR announcements), the LessWrong team has the rough policy that we speak from our personal point of view and avoid committing to an impersonal, authoritative, “official view of LessWrong.”

I suspect (and I believe the team generally agrees) that individual team members posting from their own points of view will ultimately result in the outside world having a better understanding of our thinking (individual and collective) than if we attempt to aggregate our individual models into the “organization’s models”. Organizations don’t have models, people do.

That said, we talk a lot to each other and our models are correlated. We tend to agree on the broad outline of things, e.g. we agree at the crudest level that LessWrong is about rationality and intellectual progress, even if we don’t agree on more detailed framings and relative emphasis. We think roughly like each other, but don’t be surprised if a different team member says of a high-level vision post I wrote that it isn’t quite their model, or that they don’t agree with every detail.

Seemingly, this communication policy might allow us (the LessWrong team) to weasel out of our public statements. “Oh, that’s just what Ruby said—the rest of us never said that.” This is far from the intention. This policy is focused on how we communicate our reasons for doing things rather than statements about our commitments or actions. If a LessWrong team member says the LessWrong team plans to do something (especially major directions), it’s fair game to hold the entire team accountable for doing that thing.

(This is an unrelated question about LW that I’d like the LW team to see, but don’t think needs its own post, so I’m posting it here.) I want to mention that it remains frustrating when someone says something like “I’m open to argument” (or we’re already in the middle of a debate), I give them an argument, and then hear nothing back. I’ve actually kind of gotten used to it a bit, and don’t feel as frustrated as I used to, but it’s still pretty much the strongest negative emotion I ever experience when participating on LW.

I believe there are good reasons to address this aside from my personal feelings, but I’m not sure if I’m being objective about that. So I’m interested to know whether this is something that’s on the LW team’s radar as a problem that could potentially be solved/ameliorated, or if they think it’s not worth solving, probably can’t be solved, or is more of a personal problem than a community problem. (See this old feature suggestion, which I believe I’ve also re-submitted more recently, and which might be one way to try to address the problem.)

Ray has recently been advocating for a more general tagging system (kinda like GitHub and Discord, but with tags optimized for LW reactions like the ones in your feature suggestion), and the LW team has been more seriously exploring the idea of breaking voting down into two dimensions: “agree/disagree” + “approve/disapprove”. My guess is that both of those would help a bit, though for your use-case it also seems important that you know the identity of who reacted what way.
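(To make the two-axis idea concrete, here is a minimal, purely hypothetical sketch in Python; the field names and tallying are my own illustrative assumptions, not the team’s actual design. Each vote carries an independent position on each axis, the axes are tallied separately rather than collapsed into one karma number, and voter identity is kept so reactions needn’t be anonymous.)

```python
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str      # identity is kept, per the use-case above (hypothetical field names)
    agree: int      # -1, 0, or +1 on the agree/disagree axis
    approve: int    # -1, 0, or +1 on the approve/disapprove axis

def tally(votes):
    """Tally each axis independently instead of collapsing them into one score."""
    return {
        "agree": sum(v.agree for v in votes),
        "approve": sum(v.approve for v in votes),
    }

votes = [
    Vote("alice", agree=+1, approve=+1),   # agrees, and glad the comment exists
    Vote("bob",   agree=-1, approve=+1),   # disagrees, but approves of it being said
    Vote("carol", agree=-1, approve=-1),
]
print(tally(votes))   # {'agree': -1, 'approve': 1}
```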

I think both agree/disagree and approve/disapprove are toxic dimensions for evaluating quality discussions. Useful communication is about explaining and understanding relevant things; real-world truth and preference are secondary distractions. So lucid/confused (as opposed to clear/unclear) and relevant/misleading (as opposed to interesting/off-topic) seem like better choices.

I think both agree/disagree and approve/disapprove are toxic dimensions for evaluating quality discussions.

Hmm, but are they more toxic than whatever “upvote/downvote” currently means? The big constraining factor on things like this seems to me to be the complexity and inferential distance of what the voting means. I would be worried that it would be much harder to get people to understand “lucid/confused” and “relevant/misleading”, though I am not confident.

Within the hypothetical where the dimensions I suggest are better, fuzziness of upvote/downvote is better in the same way as uncertainty about facts is better than incorrect knowledge, even when the latter is easier to embrace than correct knowledge. In that hypothetical, moving from upvote/downvote to agree/disagree is a step in the wrong direction, even if the step in the right direction is too unwieldy to be worth making.

[E]ven if there are collective decisions, there are no collective models. Not real models.

When the team agrees to do something, it is only because enough of the individual team members individually have models which indicate it is the right thing to do.

There’s something kind of worrying/sad about this. One would hope that with a small enough group, you’d be able to have discussion and Aumann-magic convergence lead to common models (and perhaps values?) being held by everybody. In this world, the process of making decisions is about gathering information from team members about the relevant considerations, and then a consensus emerges about what the right thing to do is, driven by consensus beliefs about the likely outcomes. When you can’t do this, you end up in voting theory land, where even if each individual is rational, methods to aggregate group preferences about plans can lead to self-contradictory results.
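As a concrete, standard illustration of that last point (a minimal sketch with made-up members and plans, not anything specific to the LW team): under pairwise majority vote, three perfectly transitive individual rankings can still produce a cyclic group preference, the classic Condorcet paradox.

```python
# Three team members, each with a coherent (transitive) ranking over three plans.
rankings = [
    ["A", "B", "C"],   # member 1 prefers plan A > B > C
    ["B", "C", "A"],   # member 2 prefers plan B > C > A
    ["C", "A", "B"],   # member 3 prefers plan C > A > B
]

def majority_prefers(x, y):
    """True if a majority of members rank plan x above plan y."""
    votes_for_x = sum(r.index(x) < r.index(y) for r in rankings)
    return votes_for_x > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")

# All three lines print True: the group "prefers" A to B, B to C, and C to A,
# a cycle, even though every individual ranking is perfectly consistent.
```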

I don’t particularly have advice for you here—presumably you’ve already thought about the cost-benefit analysis of spending marginal time on belief communication—but the downside here felt worth pointing out.

I took the line as written to mean that there are no “opinion leaders”. In a system where people can vote but actually trust someone else’s judgement, the number of votes doesn’t reflect the number of judgement processes employed.

I also think that in a system that requires consensus, it becomes tempting to produce a false consensus. This effect is strong enough that, in any context where people bother with the concept of consensus at all, there is enough basis to suspect that genuine consensus doesn’t form, and thus a significant chance that any particular consensus is false. If the system’s way of functioning is allowed to tolerate non-consensus, it becomes practical to be the first one to break a consensus, and the value of this is enough to regard requiring consensus as harmful.

All the while, it remains true that where opinions diverge there is real debate to be had.

FWIW, we spend loads of time on belief-communication. This does mean (as Ruby says) that many of our beliefs are the same. But some are not, and sometimes the nuances matter.

In this world, the process of making decisions is about gathering information from team members about the relevant considerations, and then a consensus emerges about what the right thing to do is, driven by consensus beliefs about the likely outcomes.

This doesn’t seem very different from what we do; we just skip the step where everyone’s models necessarily converge. We still converge on a course of action. (habryka is the main decision maker, so in the event that consensus about the relevant details doesn’t emerge, we tend to default to his judgment, or [empirically] to delaying action.)

Even if they do converge (which they do quite frequently in simpler cases), I think the correct model of the situation is to say “I believe X, as does everyone else on my team”, which is a much better statement than “we believe X”, because the phrase “we believe” is usually not straightforwardly interpreted as “everyone on the team believes that X is true”; instead it usually means “via a complicated exchange of political capital we have agreed to act as if we all believe X is true”.

I second Ray’s claim that we spend loads of time on belief communication. Something like the Aumann convergence to common models might be “theoretically” doable, but I think it’d require more than 100% of our time to get there. This is indeed a bit sad and worrying for human-human communication.

This is indeed a bit sad and worrying for human-human communication.

Is it newly sad and worrying, though?

By contrast, I find it reassuring when someone explicitly notes the goal, and the gap between here and that goal, because we have rediscovered the motivation for the community. 10 years deep, and still on track.

Hmm, I think you must have misunderstood the above sentence/we failed to get the correct point across. This is a statement about epistemology that I think is pretty fundamental, and is not something that one can choose not to do.

In a system of mutual understanding, I have a model of your model, and you have a model of my model, but nevertheless any prediction about the world is a result of one of our two models (which might have converged, or at the very least include parts of one another). You can have systems that generate predictions and policies and actions that are not understood by any individual (as is common in many large organizations), but that is the exact state you want to avoid in a small team where you can invest the cost to have everything be driven by things at least one person on the team understands.

The thing described above is something you get to do if you can invest a lot of resources into communication, not something you have to do if you don’t invest enough resources.

In a system of mutual understanding, I have a model of your model, and you have a model of my model, but nevertheless any prediction about the world is a result of one of our two models (which might have converged, or at the very least include parts of one another).

We can choose to live in a world where the model in my head is the same as the model in your head, and that this is common knowledge. In this world, you could think about a prediction being made by either the model in my head or the model in your head, but it makes more sense to think about it as being made by our model, the one that results from all the information we both have (just like the integer 3 in my head is the same number as the integer 3 in your head, not two numbers that happen to coincide). If I believed that this was possible, I wouldn’t talk about how official group models are going to be impoverished ‘common denominator’ models, or conclude a paragraph with a sentence like “Organizations don’t have models, people do.”

In this world, you could think about a prediction being made by either the model in my head or the model in your head, but it makes more sense to think about it as being made by our model …

I don’t think this actually makes sense. Models only make predictions when they’re instantiated, just as algorithms only generate output when run. And models can only be instantiated in someone’s head[1].

… the integer 3 in my head is the same number as the integer 3 in your head, not two numbers that happen to coincide …

This is a statement about philosophy of mathematics, and not exactly an uncontroversial one! As such, I hardly think it can support the sort of rhetorical weight you’re putting on it…

[1] Or, if the model is sufficiently formal, in a computer—but that is, of course, not the sort of model we’re discussing.

I think models can be run on computers, and I think people passing papers can work as computers. I do think it’s possible to have an organization that does informational work that none of its human participants do. I do appreciate that such work is often very secondary to the work that actual individuals do. But I think that if someone aggressively tried to make a system that would survive a “bad faith” human actor, it might be possible and even feasible.

I would phrase it as: the number 3 in my head and the number 3 in your head both correspond to the number 3 “out there”, or to the “common social” number 3.

For example, my number 3 might participate as part of the input to a cached multiplication-table result, while I am not expecting everyone else’s to do so.

The old philosophical problem of whether the red I see is the same red that you see kind of highlights how the reds could plausibly be incomparable, while the practical reality that color talk is possible is not in question.