ozziegooen (Ozzie Gooen)

For those reading, the main thing I’m optimizing Foretold for right now is forecasting experiments and projects with 2-100 forecasters. The spirit of making “quick and dirty” questions for personal use conflicts a bit with that of making “well thought out and clear” questions for group use. The latter are messy to change, because changes would confuse everyone involved.

Note that Foretold does support full probability distributions with a Guesstimate-like syntax, which PredictionBook doesn’t. But it’s less focused on the quick individual use case in general.

If there are recommendations for simple ways to make it better for individuals, or perhaps other workflows, I’d be up for adding some support or integrations.

My impression, after some thought and discussion (over the last ~1 year or so), is that people being smarter / predicting better will probably decrease the number of wars and make them less terrible. That said, there are of course tails; perhaps some specific wars could be far worse (one country being much better at destroying another).

As I understand it, many wars started in part due to overconfidence: both sides were overconfident about their odds of success (for many reasons). If they were properly calibrated, they would be more likely to partake in immediate trades/concessions or similar, rather than take on fights, which are rather risky.

Similarly, I wouldn’t expect different AGIs to physically fight each other often at all.

Thanks! I’ve looked at (2) a bit and some other work on Information Architecture.

I’ve found it interesting but kind of old-school; it seems to have been a big deal when web tree navigation was a big thing, and to have died down after. It also seems pretty applied, in that there isn’t much connection to academic theory in how one could think about these classifications.

More Narrow Models of Credences

Epistemic Rigor
I’m sure this has been discussed elsewhere, including on LessWrong. I haven’t spent much time investigating other thoughts on these specific lines. Links appreciated!

The current model of a classically rational agent assumes logical omniscience and precomputed credences over all possible statements.

This is really, really bizarre upon inspection.

First, “logical omniscience” is very difficult, as has been discussed (the Logical Induction paper goes into this).

Second, “all possible statements” includes statements from all complexity classes that we know of (from my understanding of complexity theory). “Credences over all possible statements” would easily include uncountable infinities of credences. One could clarify that even arbitrarily large amounts of computation would not be able to hold all of these credences.

Precomputation for things like this is typically a poor strategy, for this reason. The often-better strategy is to compute things on demand.

A nicer definition could be something like:

A credence is the result of an [arbitrarily large] amount of computation being performed using a reasonable inference engine.

It should be quite clear that calculating credences based on existing explicit knowledge is a very computationally intensive activity. The naive Bayesian way would be to start with one piece of knowledge, and then perform a Bayesian update on each additional piece. The “pieces of knowledge” can be prioritized according to heuristics, but even then, this would be a challenging process.
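To make that sequential-update picture concrete, here is a minimal sketch, assuming a single binary hypothesis and hypothetical likelihood ratios for each piece of knowledge (all numbers are made up for illustration):

```python
# A minimal sketch of the "naive" sequential-update picture described above:
# a binary hypothesis H, where each piece of knowledge contributes a likelihood
# ratio P(evidence | H) / P(evidence | not-H). All numbers are illustrative.

def update(prior: float, likelihood_ratio: float) -> float:
    """One Bayesian update on a binary hypothesis, expressed via odds."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

credence = 0.5                                      # starting credence in H
evidence_likelihood_ratios = [3.0, 0.8, 2.5, 1.2]   # hypothetical pieces of knowledge

for lr in evidence_likelihood_ratios:
    credence = update(credence, lr)
    print(f"after evidence with LR {lr}: credence = {credence:.3f}")
```

Even in this toy form, every new piece of knowledge requires another pass; with realistic hypothesis spaces and dependent evidence, the cost grows quickly.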

I think I’d like to see specifications of credences that vary with computation or effort. Humans don’t currently have efficient methods to use effort to improve our credences, as a computer or agent would be expected to.
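As one illustration of a credence that varies with effort, here is a toy sketch in which the credence is just a Monte Carlo estimate that sharpens as more samples are spent; the target (the probability that two dice sum to at least 9) and the budgets are arbitrary choices for illustration, not anything from the original discussion:

```python
import random

# A toy illustration of a credence that is a function of computation spent:
# estimate P(sum of two dice >= 9) by simulation, reporting the estimate at
# increasing compute budgets. The target question and budgets are arbitrary.

def credence_at_budget(num_samples: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(num_samples)
        if rng.randint(1, 6) + rng.randint(1, 6) >= 9
    )
    return hits / num_samples

for budget in [10, 100, 1_000, 10_000, 100_000]:
    estimate = credence_at_budget(budget)
    print(f"{budget:>7} samples -> credence ~= {estimate:.3f}")

# Exact answer is 10/36 ~= 0.278; more computation predictably tightens the estimate.
```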

Solomonoff’s theory of induction or logical induction could be relevant to the discussion of how to do this calculation.

Global Health

There’s a fair bit of resistance to long-term interventions from people focused on global poverty, but there are a few distinct things going on here. One is that there could be a disagreement on the use of discount rates for moral reasoning; a second is that the long-term interventions are much more strange.

No matter which is chosen, however, I think that the idea of “donate as much as you can per year to global health interventions” seems unlikely to be ideal upon careful thought.

For the last few years, GiveWell’s cost-to-save-a-life estimates have seemed fairly steady. The S&P 500 has not been steady; it has gone up significantly.

Even if you committed to giving purely to global health, you’d generally have been better off delaying. It seems quite possible that for every life you would have saved in 2010, you could have saved two or more by saving the money and spending it in 2020, with a fairly typical investment strategy. (Arguably, leverage could have made this multiple much higher.) From what I understand, a life saved in 2010 would likely not have resulted in one extra life-equivalent saved in 2020; the return per year was likely less than that of the stock market.

One could of course say something like, “My discount rate is over 3-5% per year, so that outweighs this benefit.” But if that were true, it seems likely that the opposite strategy would have worked: one could have borrowed a lot of money in 2010, donated it, and then spent the next 10 years paying it back.
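A rough back-of-the-envelope version of the comparison above, with all parameters assumed for illustration (a flat cost per life saved, a 7%/year average return, and a 10-year horizon; none of these figures come from GiveWell):

```python
# A rough illustration of the give-now vs. invest-then-give comparison above.
# All numbers are assumptions for the sketch, not GiveWell estimates.

cost_per_life = 4_000        # assumed constant cost to save a life (USD)
donation_2010 = 100_000      # hypothetical amount available in 2010
market_return = 0.07         # assumed average annual return
years = 10

lives_if_given_2010 = donation_2010 / cost_per_life
grown = donation_2010 * (1 + market_return) ** years
lives_if_given_2020 = grown / cost_per_life

print(f"give in 2010:          {lives_if_given_2010:.0f} lives")
print(f"invest, give in 2020:  {lives_if_given_2020:.0f} lives "
      f"({lives_if_given_2020 / lives_if_given_2010:.2f}x)")

# The discount-rate rejoinder: delaying only loses if you discount future
# lives at a rate higher than the assumed investment return.
```

Under these assumptions, delaying roughly doubles the lives saved, which is the ~2x figure mentioned above; a discount rate above the assumed market return would flip the conclusion.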

Thus, it seems conveniently optimal if one’s enlightened preferences would suggest neither investing for long periods nor borrowing.

EA Saving

One obvious counter to immediate donations would be to suggest that the EA community financially invests money, perhaps with leverage.

While it is difficult to tell if other interventions may be better, it can be simpler to ask if they are dominant; in this case, that means that they predictably increase EA-controlled assets at a rate higher than financial investments would.

A good metaphor could be the finances of cities. Hypothetically, cities could invest much of their earnings near-indefinitely, or at least for very long periods, but in practice this typically isn’t key to their strategies. Often they can do quite well by investing in themselves. For instance, core infrastructure can be expensive but predictably leads to significant city revenue growth. Often these strategies are so effective that cities issue bonds in order to pay for more of this kind of work.

In our case, there could be interventions that are obviously dominant to financial investment in a similar way. An obvious one would be education: if it were clear that giving or lending someone money would lead to predictable donations, that could be a dominant strategy compared to more generic investment strategies. Many other kinds of community growth or value promotion could also fit into this kind of analysis. Relatedly, if there were enough of these strategies available, it could make sense to take out loans in order to pursue them further.

What about a non-EA growth opportunity? Say, “vastly improving scientific progress in one specific area.” This could be dominant (to investment, for EA purposes) if it would predictably help EA purposes by more than the investment returns. This could be possible. For instance, perhaps a $10 million donation to life extension research[1] could predictably increase $100 million of EA donations by 1% per year, starting in a few years.
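Here is a hedged sketch of how one might run that dominance check, treating the footnoted example as “an extra ~$1M/year of donations after a delay” and comparing it to simply investing the grant; the delay, horizon, and 5% return are all made-up parameters, and the conclusion flips depending on them:

```python
# A rough "dominance" check for the footnoted hypothetical: a $10M grant that
# adds 1% per year to a $100M/yr EA donation flow (i.e. ~$1M/yr extra),
# starting after a 3-year delay, versus simply investing the $10M at an
# assumed 5%/yr and donating it at the horizon. All parameters are made up.

grant = 10_000_000
extra_per_year = 0.01 * 100_000_000   # ~$1M/yr of additional donations
delay_years = 3
horizon = 20
investment_return = 0.05

extra_from_grant = extra_per_year * max(0, horizon - delay_years)
value_if_invested = grant * (1 + investment_return) ** horizon

print(f"extra donations from grant over {horizon}y: ${extra_from_grant:,.0f}")
print(f"invested grant after {horizon}y:            ${value_if_invested:,.0f}")
print("grant dominates" if extra_from_grant > value_if_invested else "investing dominates")
```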

One catch with these strategies is that many would fall into the bucket of “things a generic wealthy group could do to increase their wealth,” which is mediocre because we should expect that type of thing to be well funded already. We may also want interventions that differentially change wealth amounts.

Somewhat sadly, this seems to suggest that some resulting interventions may not be “positive-sum” for all relevant stakeholders. Many of the interventions that are positive-sum with respect to other powerful interests may already be funded, so the remaining ones could be relatively neutral or zero-sum for other groups.

[1] I’m just using life extension because the argument would be simple, not because I believe it could hold. I think it would be quite tricky to find great options here, as is evidenced by the fact that other very rich or powerful actors would have similar motivations.

I’m quite curious how this ordering correlates with the original LessWrong karma of each post, if that analysis hasn’t been done yet. Perhaps I’d be more curious to better understand what a great ordering would be. I feel like there are multiple factors taken into account when voting, and it’s also quite possible that the userbase represents multiple clusters that would have distinct preferences.

One nice thing about cases where the interpretations matter is that the interpretations are often easier to measure than intent (at least for public figures). Authors can hide or lie about their intent, or simply never choose to reveal it. Interpretations can be measured using surveys.

You are trying to estimate the EV of a document.
Here you want to understand the expected and actual interpretations of the document. The intention only matters insofar as it affects the interpretations.

You are trying to understand the document. Example: You’re reading a book on probability to understand probability.
Here the main thing to understand is probably the author’s intent. Understanding the interpretations and misinterpretations of others is mainly useful so that you can understand the intent better.

You are trying to decide if you (or someone else) should read the work of an author.
Here you would ideally understand the correctness of the interpretations of the document, rather than that of the intention. Why? Because you will also be interpreting it, and you are likely somewhere in the range of people who have interpreted it. For example, if you are told, “This book is apparently pretty interesting, but every single person who has attempted to read it, besides one, apparently couldn’t get anywhere with it after spending many months trying,” or worse, “This author is actually quite clever, but the vast majority of people who read their work misunderstand it in profound ways,” then you should probably not make an attempt, unless you are highly confident that you are much better than the mentioned readers.

Communication should be judged for expected value, not intention (by consequentialists)

TLDR: When trying to understand the value of information, understanding the public interpretations of that information could matter more than understanding the author’s intent. When trying to understand the information for other purposes (like reading a math paper to understand math), this does not apply.

If I were to scream “FIRE!” in a crowded theater, it could cause a lot of damage, even if my intention were completely unrelated. Perhaps I was responding to a devious friend who asked, “Would you like more popcorn? If yes, shout ‘FIRE!’”

Not all speech is protected by the First Amendment, in part because speech can be used to cause expected harm.

One common defense of incorrect predictions is to claim that the interpretations weren’t the intentions. “When I said that the US would fall if X were elected, I didn’t mean it would literally end. I meant more that...” These kinds of statements were discussed at length in Expert Political Judgment.

But this defense rests on the idea that communicators should be judged on intention, rather than on expected outcomes. In those cases, it was often clear that many people interpreted these “experts” as making fairly specific claims that were later rejected by their authors. I’m sure that much of this could have been predicted. The “experts” certainly didn’t seem to be going out of their way to make their after-the-outcome interpretations clear before the outcome.

I think it’s clear that a lot of people consider the intention-interpretation distinction highly important, so much so as to argue that interpretations, even predictable ones, matter less than intentions in decision-making around speech acts. I.e., “The important thing is to say what you truly feel; don’t worry about how it will be understood.”

But for a consequentialist, this distinction isn’t particularly relevant. Speech acts are judged on expected value (and thus expected interpretations), because all acts are judged on expected value. Similarly, I think many consequentialists would claim that there’s nothing metaphysically unique about communication as opposed to other actions one could take in the world.

Some potential implications:

Much of communicating online should probably be about developing empathy for the reader base, and a sense for what readers will misinterpret, especially if such misinterpretation is common (which it seems to be).

Analyses of the interpretations of communication could be more important than analyses of the intentions of communication; i.e., understanding authors and artistic works in large part by understanding their effects on their viewers.

It could be very reasonable to attempt to map non-probabilistic forecasts into probabilistic statements based on how readers would interpret them. These forecasts could then be scored using scoring rules, just like regular probabilistic statements. This would go something like: “I’m sure that Bernie Sanders will be elected” → “The readers of that statement seem to think the author is applying a probability of 90-95% to the statement ‘Bernie Sanders will win’” → a Brier/log score.
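A minimal sketch of that last step, assuming readers’ interpretations have already been aggregated into a single probability (the 0.92 figure and the outcome below are hypothetical):

```python
import math

# Readers' interpretations of a verbal forecast are mapped to a probability,
# then scored once the outcome is known. The 0.92 figure and the outcome are
# hypothetical placeholders.

interpreted_probability = 0.92   # e.g. readers read "I'm sure X will happen" as ~90-95%
outcome = 0                      # 1 if the event happened, 0 if it did not

brier_score = (interpreted_probability - outcome) ** 2
log_score = math.log(interpreted_probability if outcome == 1 else 1 - interpreted_probability)

print(f"Brier score: {brier_score:.3f}")   # lower is better
print(f"Log score:   {log_score:.3f}")     # higher (closer to 0) is better
```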

Note: Please do not interpret this statement as attempting to say anything about censorship. Censorship is a whole different topic with distinct costs and benefits.

For what it’s worth, I predict that this would have gotten more upvotes here, at least with different language, though I realize this was not made primarily for LW.

my personal opinion is that LW shouldn’t cater to people who form opinions on things before reading them and we should discourage them from hanging out here.

I think this is a complicated issue. I could appreciate where it’s coming from and could definitely imagine things going too far in either direction. I imagine that both of us would agree it’s a complicated issue, and that there’s probably some line somewhere, though we may of course disagree on where specifically it is.

A literal-ish interpretation of your phrase there is difficult for me to interpret. I feel like I start with priors on things all the time. For example, if I know an article comes from The NYTimes vs. The Daily Stormer, that snippet of data by itself gives me what seems like useful information. There’s a ton of stuff online I choose not to read because it comes from sources I can’t trust, or because of a quick read of the headline.

I would guess that one reason why you had a strong reaction, and/or why several people upvoted you so quickly, was that you/they were worried that my post would be understood by some as “censorship = good” or “LessWrong needs way more policing.”

If so, I think that’s a great point! It’s similar to my original point!

Things get misunderstood all the time.

I tried my best to make my post understandable. I tried my best to condition it so that people wouldn’t misinterpret or overinterpret it. But then my post was misunderstood (from what I can tell, unless I’m seriously misunderstanding Ben here) literally within 30 minutes.

Did you interpret me as saying, “One should be sure that zero readers will feel offended”? I think that would clearly be incorrect. My point was that there are cases where one may believe that a bunch of readers will be offended, and where the cost of changing things so that this isn’t the case is relatively low.

For instance, one could make lots of points that use alarmist language to poison the well, where the language is technically correct but very predictably misunderstood.

I think there is obviously some line. I imagine you would as well. It’s not clear to me where that line is. I was trying to flag that I think some of the language in this post may have crossed it.

Apologies if my phrasing was misunderstood. I’ll try changing it to be more precise.

I think I’m fairly uncomfortable with some of the language in this post being on LessWrong as such. It seems from the other comments that some people find some of the information useful, which is a positive signal. However, there are 36 votes on this, with a net of +12, which is a pretty mixed signal. My impression is that few of the negative voters left comments explaining their concerns.

I think with any intense language, the issue isn’t only “Is this effective language to convey the point without upsetting an ideal reader?”, but also something like, “Given that there is a wide variety of readers, are we sufficiently sure that this will generally not needlessly offend or upset many of them, especially in ways that could easily be improved upon?”

I could imagine casual readers quickly looking at this and assuming it’s related to the PUA community or similar groups that have some sketchy connotations.

This presents two challenges. First, anyone who makes this inference may also assume that other writers on LessWrong share the beliefs they think this kind of writing signals. Second, it may attract other writing that is quite bad in ways we definitely don’t want.

I would suggest that in the future, posts either don’t use such dramatic language here, or at the very least are made as link posts.

I’d be curious if others have takes on this issue; it’s definitely possible my intuitions are off here.