Why is LessWrong not an Amazon affiliate? I recall buying at least one book due to it being mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it would number in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong, assuming a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), that would work out at $375/year.

IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 5*4 minutes (.com, .co.uk, .de, .fr), +20 for a GeoIP database, +3-90 to set up URL rewriting (a wide range, since coding often takes far longer than anticipated; I'd be happy to code this) would give a 'worst case' scenario of $173 annualized returns per hour of work.
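For what it's worth, the arithmetic only balances if you back out an implied referral commission of 15% (my inference from the numbers above, not something stated in the comment). A quick sanity check of the figures:

```python
users = 500          # assumed active users
buy_frac = 0.25      # fraction buying at least one book per year
price = 20.0         # assumed mean purchase value, in dollars
commission = 0.15    # implied referral rate, back-calculated from the $375 figure

annual = users * buy_frac * price * commission   # $375.00/year
hours = (5 * 4 + 20 + 90) / 60                   # worst-case setup time in hours
per_hour = annual / hours                        # about $173/hour, annualized

print(annual, round(per_hour))
```

So the "$173 annualized per hour of work" figure checks out against the stated worst-case setup time of 130 minutes.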

Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that Metafilter and StackOverflow do this, though sadly I could not find any information on the returns they see from it. So, is there any reason why nobody has done this, or did nobody just think of it/get around to it?

From your link, a further link doesn't make it sound great at SO: 2-4x the utter failure. But they are very positive about it because the cost of implementation was very low. Just top-level posts, or no geolocating, would be even cheaper.

A possibly relevant data point: I usually post any links to books I put online with my Amazon affiliate link, and in the last 3 months I've had around 25 clicks from links to books I believe I posted in Less Wrong comments, and no conversions.

The entire world media seems to have had a mass rationality failure about the recent suicides at Foxconn. There have been 10 suicides there so far this year, at a company which employs more than 400,000 people. This is significantly lower than the base rate of suicide in China. However, everyone is up in arms about the 'rash', 'spate', 'wave', or whatever, of suicides going on there.

When I first read the story, I was reading a plausible explanation of what causes these suicides by a guy who's usually pretty on the ball. Partly due to the neatness of the explanation, it took me a while to realise that there was nothing to explain.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. It's even harder to achieve this when the fiction comes ready-packaged with a plausible explanation (especially one which fits neatly with your political views).

That's what I thought as well, until I read this post from "Fake Steve Jobs". Not the most reliable source, obviously, but he does seem to have a point:

But, see, arguments about national averages are a smokescreen. Sure, people kill themselves all the time. But the Foxconn people all work for the same company, in the same place, and they're all doing it in the same way, and that way happens to be a gruesome, public way that makes a spectacle of their death. They're not pill-takers or wrist-slitters or hangers. … They're jumpers. And jumpers, my friends, are a different breed. Ask any cop or shrink who deals with this stuff. Jumpers want to make a statement. Jumpers are trying to tell you something.

Now, I'm not entirely sure of the details, but if it's true that all the suicides in the recent cluster consisted of jumping off the Foxconn factory roof, that does seem more significant than 15 employees committing suicide in unrelated incidents. In fact, it might even be the case that there are many more suicides than the ones we've heard about, and the cluster of 15 are just those who killed themselves via this particular, highly visible method (I'm just speculating here).

I'm not sure what to make of this; without knowing more of the details it's probably impossible to say what's going on. But the basic point seems sound: the argument about being below the national average suicide rate doesn't really hold up if there's something specific about a particular group of incidents that makes them non-independent. As an example, if the members of some cult commit suicide en masse, you can't look at the region the event happened in and say "well, the overall suicide rate for the region is still below the national average, so there's nothing to see here".

I was surprised when I read a statistical analysis of national death rates. Whenever a suicide by a particular method was publicized in newspapers or on television, deaths of that form spiked in the following weeks. This is despite the copycat deaths often being recorded as 'accidents' (examples included crashed cars and aeroplanes). Scary stuff (or very impressive statistics-fu).

Yes, this is connected to the existence of suicide epidemics. The most famous example is the ongoing suicide epidemic over the last fifty years in Micronesia, where both the causes and methods of suicide have been the same (hanging). See for example this discussion.

If all the members of a cult committed suicide, then the local rate is 100%.

Fair enough; my example was poorly thought out in retrospect.

But I don't think it's correct that there's nothing to explain. If it's true that all 15 committed suicide by the same method (a fairly rare one, frequently used by people trying to make a public statement with their death) then there does seem to be something needing to be explained. As Fake Steve Jobs points out later in the cited article, if 15 employees of Walmart committed suicide within the span of a few months, all of them by jumping off the roof of their Walmart, wouldn't you think that was odd? Don't you think that would be more significant, and more deserving of an explanation, than the same 15 Walmart employees committing suicide in a variety of locations, by a variety of different methods?

I'm not committing to any particular explanation here (Douglas Knight's suggestion, for one, sounds like a plausible explanation which doesn't involve any wrongdoing on Foxconn's part); I'm just saying that I do think there's "something to explain".

The first question that came to mind when I heard about this story was 'what's the base rate?'. I didn't investigate further, but a quick mental estimate made me doubt that this represented a statistically significant increase above the base rate. It's disappointing, yet unsurprising, that few if any media reports even consider this point.
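The base-rate check is easy to make concrete. Assuming a background suicide rate of roughly 20 per 100,000 per year (a ballpark figure I'm supplying for illustration, not one from the thread) and ten suicides in about five months, a Poisson model says the observed count is far *below* expectation:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))

employees = 400_000
rate = 20 / 100_000               # assumed annual suicide rate (illustrative)
lam = employees * rate * 5 / 12   # expected suicides in ~5 months: about 33
p = poisson_cdf(10, lam)          # chance of 10 or fewer, if Foxconn matched the base rate
```

Under these assumptions, seeing only ten suicides would itself be a striking anomaly in the other direction, which is exactly the commenter's point about the media coverage.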

Wasn't there a somewhat well-publicized "spate" of suicides at a large French telecom a while back? I remember the explanation being the same: the number observed was just about what you'd expect for an employer of that size.

Even if the suicide rate were somewhat higher than average, it still wouldn't necessarily tell you much. You should really be looking at the probability of that number of suicides occurring in some distinct subset of the population: given all the subsets of a population that you can identify, you would expect some to have higher suicide rates than the population as a whole. The relevant question is 'what is the probability of observing this number of suicides by chance in some randomly selected subset of this size?'

Incidentally, the rate appears to be below that of Cambridge University students:

RESULTS: We identified 157 student deaths during academic years 1970-1996, of which 36 appeared to be suicides. The overall suicide rate was 11.3/100,000 person-years at risk. Suicide rates were similar to those seen amongst 15- to 24-year-olds in the general population. There were non-significant trends for male postgraduates to be over-represented and first-year undergraduates under-represented. Examination times were not associated with excess suicide. CONCLUSIONS: Suicide rates in University of Cambridge students do not appear to be unduly high.

Yes, this is my counter-counter-criticism as well. 'Sure, the overall China rate may be the same, but what's the suicide rate for young, employed workers at a technical company with bright prospects? I'll bet it's lower than the overall rate...'

Agreed. Also, I think what got the suicides in China into the news was that the victims attributed their suicides specifically to some strange policy or rule the company adhered to. It could be that the "normal" suicides at the company are being ignored, and the ones being reported are suicides on top of those, justifying the concern that this is abnormal.

This was why I went looking for stats on suicides amongst university students. I remembered some talk, when I was at Cambridge, of a high suicide rate, which you might see as somewhat similarly counter-intuitive to a high suicide rate for 'young, employed workers employed by a technical company with bright prospects'.

Actually, there are a number of reasons to expect a somewhat elevated suicide rate in a relatively high-pressure environment where large numbers of young people have left home for the first time and are living in close proximity to large numbers of strangers their own age. Stories about high suicide rates at elite universities tend to take a very different tack from stories about Chinese workers, however.

There's a recreation centre, but the engineers I was training told me they had never been there. Then I saw on TV that there's a stress room full of these dolls that look like Japanese warriors. You get a bat and you beat them. That's how they are encouraged to relieve the stress.

Yeah, I can see how something like this could happen. By the way, a few statistics don't exactly prove anything. Were there 10 deaths last year? The year before? Do other factories have similar problems? Etc. Too many variables.

Incidentally, note that the evidence strongly suggests that actively taking out your aggression actually increases rather than decreases stress and aggression levels. See, for example, Berkowitz's 1970 paper "Experimental investigation of hostility catharsis" in the Journal of Consulting and Clinical Psychology.

Marginal Revolution linked to A Fine Theorem, which has summaries of papers in decision theory and other relevant economics, including the classic "agreeing to disagree" results. A paper linked there claims that the probability settled on by Aumann-agreers isn't necessarily the same one they'd reach if they shared their information, which is something I'd been wondering about. In retrospect this seems obvious: if Mars and Venus only both appear in the sky when the apocalypse is near, and one agent sees Mars and the other sees Venus, then they conclude the apocalypse is near if they exchange info; but if the probabilities for Mars and Venus are symmetrical, then no matter how long they exchange probabilities, they'll both conclude the other one probably saw the same planet they did. The same thing should happen in practice when two agents figure out different halves of a chain of reasoning. Do I have that right?

ETA: it seems, then, that if you're actually presented with a situation where you can communicate only by repeatedly sharing probabilities, you're better off just conveying all your info by using probabilities of 0 and 1 as Morse code or whatever.

I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked: what is the probability that the sum of the dice is 9?

Now if one sees a 1 or 2, he knows the probability is zero. But let's suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1/6. Both players exchange this first estimate. Now, curiously, although they agree, it is not common knowledge that this value of 1/6 is their shared estimate. After hearing 1/6, each knows that the other die is one of the four values 3-6. So actually the probability is calculated by each as 1/4, and this is now common knowledge (why?).

And of course this estimate of 1/4 is not what they would come up with if they shared their die values; they would get either 0 or 1.
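The announcement dynamics in this example can be checked mechanically. A sketch (the candidate-set update rule is the standard common-knowledge construction; the variable names and structure are mine):

```python
from fractions import Fraction

def aumann_exchange(a, b, event, max_rounds=12):
    """a, b: the two private rolls; event(x, y) is the proposition of interest.
    Each round, both agents announce P(event | own roll, public information);
    each announcement publicly prunes the candidate set for the speaker's roll.
    Returns the list of (p_A, p_B) announcements, ending at a fixed point."""
    def post_A(x, S):   # A's posterior if A rolled x and B's roll is uniform on S
        return Fraction(sum(event(x, y) for y in S), len(S))
    def post_B(y, S):   # B's posterior if B rolled y and A's roll is uniform on S
        return Fraction(sum(event(x, y) for x in S), len(S))
    S_A = set(range(1, 7))  # commonly known candidates for B's roll
    S_B = set(range(1, 7))  # commonly known candidates for A's roll
    history = []
    while len(history) < max_rounds:
        pA, pB = post_A(a, S_A), post_B(b, S_B)
        history.append((pA, pB))
        new_S_B = {x for x in S_B if post_A(x, S_A) == pA}
        new_S_A = {y for y in S_A if post_B(y, S_B) == pB}
        if (new_S_A, new_S_B) == (S_A, S_B):
            break
        S_A, S_B = new_S_A, new_S_B
    return history

# Rolls 3 and 4, asking for P(sum == 9): announcements go (1/6, 1/6) -> (1/4, 1/4).
print(aumann_exchange(3, 4, lambda x, y: x + y == 9))
```

Running the 7-or-8 variant (`lambda x, y: x + y in (7, 8)`) with the same rolls shows the behaviour the follow-up comment describes: the agents pass through several rounds of matching announcements while still revising their estimates, before converging on 0 or 1 (here, 1).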

Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.

Same setup as before: two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.

I will leave it as a puzzle for now, in case someone wants to work it out, but it appears to me that in this case they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of agreement where they nevertheless change their estimates; perhaps related to the phenomenon of "violent agreement" we often see.

Strange how this small change to the conditions gives such different results. But it's a good example of how agreement is inevitable.

But in reality, what happens when people try to Aumann-agree involves a different set of problems, such as status-signalling: especially the idea that updating toward someone else's probability is instinctively seen as giving them status.

Observation: the May open thread, part 2, had very few posts in its last days, whereas this one has exploded within the first 24 hours of its opening. I know I deliberately withheld content from it, since once it is superseded by a new thread, few would go back and look at the posts in the previous one. This would predict a slowing down of content in the open threads as the month draws to a close, and a sudden burst at the start of the next month: a distortion that is an artifact of the way we organise discussion. Does anybody else follow the same rule for their open thread postings? Is there something that should be done to solve this artificial throttling of discussion?

I don't post in the open threads much, but if I run into a good rationality quote I tend to wait until the next rationality quotes thread is opened, unless the current one is less than a week or so old.

'Here is Eric Boyd's talk about the device he built called North Paw: a haptic compass anklet that continuously vibrates in the direction of North. It's a project of Sensebridge, a group of hackers that are trying to "make the invisible visible".'

To the powers that be: is there a way for the community to have some insight into the analytics of LW? That could range from periodic reports, to selective access, to open access. There may be a good reason why not, but I can't think of it. Beyond generic transparency brownie points, since we are a community interested in popularising the website, access to analytics may produce good, unforeseen insights. Also, authors would be able to see the viewership of their articles, and related keyword searches, and so be better able to adapt their writing to the audience. For me, a downside of posting here instead of on my own blog is the inability to access analytics. Obviously I still post here, but this is a downside that may not have to exist.

So I've started drafting the very beginnings of a business plan for a Less Wrong (book) store-ish type thingy. If anybody else is already working on something like this and is advanced enough that I should not spend my time on this mini-project, please reply to this comment or PM me. However, I would rather not be inundated with ideas on how to operate such a store yet: I may make a Less Wrong post in the future to gather ideas. Thanks!

In my experience, happy people tend to be more optimistic and more willing to take risks than sad people. This makes sense, because we tend to be happier when things are generally going well for us: that is when we can afford to take risks. I speculate that the emotion of happiness has evolved for this very purpose, as a mechanism that regulates our risk aversion and makes us more willing to risk things when we have the resources to spare.

Incidentally, this would also explain why people falling in love tend to be intensely happy at first. In order to get and keep a mate, you need to be ready to take risks. Also, if happiness is correlated with resources, then being happy signals having lots of resources, increasing your prospective mate's chances of accepting you. [...]

I was previously talking with Will about the degree to which people's happiness might affect their tendency to lean towards negative or positive utilitarianism. We came to the conclusion that people who are naturally happy might favor positive utilitarianism, while naturally unhappy people might favor negative utilitarianism. If this theory of happiness is true, then that makes perfect sense: risk aversion and a desire to avoid pain correspond to negative utilitarianism, and willingness to tolerate pain corresponds to positive utilitarianism.

Note that most Western humans have far greater access to resources than our ancestors did, so we are likely all far more risk-averse than would be optimal given the environment.

Hi Kaj, I really liked the article. I had a relevant theory to explain the perceived difference in attitudes between north Europeans and south Europeans. I guess you could call it a theory of unhappiness. Here goes:

I take as granted that mildly depressed people tend to form more accurate depictions of reality, and that north Europeans have a higher incidence of depression and also much better functioning economies and democracies. Given a low-resource environment, one needs to plan further ahead and make more rational projections of the future. If being on the depressive side makes one more introspective and thoughtful, then it would be conducive to having better long-term plans. In a sense, happiness could be greed-inducing, in a greedy-algorithm sense. This more or less agrees with Kaj's theory. OTOH, non-happiness would encourage long-term planning and even more co-operative behaviour.

In the current environment, resources may not be scarce, but our world has become much more complex, with actions having much deeper consequences than in the ancestral environment (Nassim Nicholas Taleb makes this point in The Black Swan), and therefore also needing better thought-out courses of action. So northern Europeans have lucked out, in that their adaptation to climate has been useful for the current reality. If one sees corruption as local-greedy behaviour, as opposed to lawfulness as global-cooperative behaviour, this would also explain why, going closer to the equator, you generally see an increase in corruption and also failures of democratic government. Taken further, it would imply that near-equator peoples are simply not well adapted to democratic rule, which demands a certain limiting of short-term individual freedom for the longer-term common good, and that a more distributed/localised form of governance would do much better. I think this (rambling) theory can more or less be pieced together with Kaj's, adding long-term planning as a second dimension.

Disclaimer: before anyone accuses me of discrimination, I am in fact a south European (Greek) living in north Europe (the UK), and while this does not absolve me of all possibility of racism against my own, this theory has formed from my effort to explain the cultural differences I experience on a daily basis. Take it for what it's worth.

If any given instance of discrimination increases the degree of correspondence between your map and the territory, then there is no need for apology. Are these sorts of disclaimers really necessary here?

How does this make sense, exactly? A happy person, with more resources, would be better off not taking risks that could result in losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results. If you told a rich person, 'jump off that cliff and I'll give you a million dollars', they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it, as long as there was a chance they could survive.

My idea of why people were happy wasn't a static value of how many resources they had, but a comparative value. A rich person thrown into poverty would be very unhappy, but the poor person might be happy.

How does this make sense, exactly? A happy person, with more resources, would be better off not taking risks that could result in losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results.

Kaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff. An animal in a bad (but not-yet-catastrophic) situation is better off exploiting available resources than scouting for new ones, since in the EEA any "bad" situation was likely to be temporary (winter, the immediate presence of a predator, etc.) and it's better to ride out the situation.

OTOH, when resources are widely available, exploring is more likely to be fruitful and worthwhile.

The connection to happiness and risk-taking is more tenuous.
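The explore/exploit point can be made concrete with a toy model (entirely my own construction, not anything from the thread): an agent pays one unit of energy per day, and chooses between a known patch yielding a steady 1 unit and unknown patches yielding 0 or 3 with even odds. Exploring has the higher expected yield (1.5/day versus 1.0), yet a poor agent that explores risks ruin:

```python
import random

def survives(reserves, explore, days=30, rng=random):
    """True if the agent gets through the period without reserves hitting zero."""
    for _ in range(days):
        gain = rng.choice((0, 3)) if explore else 1
        reserves += gain - 1   # forage minus a daily metabolic cost of 1
        if reserves <= 0:
            return False
    return True

def survival_rate(reserves, explore, trials=20_000, seed=0):
    rng = random.Random(seed)
    return sum(survives(reserves, explore, rng=rng) for _ in range(trials)) / trials

# An agent with only 2 units of reserves survives for sure by exploiting,
# but dies a substantial fraction of the time if it explores,
# despite exploring having the higher expected payoff.
poor_exploit = survival_rate(2, explore=False)   # 1.0
poor_explore = survival_rate(2, explore=True)    # roughly 0.6
```

With ample reserves the ruin risk vanishes and the higher-yield exploring strategy dominates, which is the tradeoff described above.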

If you told a rich person, 'jump off that cliff and I'll give you a million dollars', they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it, as long as there was a chance they could survive.

I'd be interested in seeing the results of that experiment. But "rich" and "poor" are even more loosely correlated with the variables in question; there are unhappy "rich" people and unhappy "poor" people, after all.

(In other words, this is all about internal, intuitive perceptions of resource availability, not rational assessments of actual resource availability.)

If I were to wager a guess, the people who would accept the deal are those who feel they are in a catastrophic situation.

Speaking of catastrophic situations, have you seen The Wages of Fear or any of its remakes? I've only seen Sorcerer, but it was quite good. It's a rather more realistic situation than jumping off a cliff, but the structure is the same: a group of desperate people driving cases of nitroglycerin-sweating dynamite across rough terrain to earn enough money that they can escape.

Driving in teams of two, they meet various hazards on their journey, including a dilapidated rope-suspension bridge swinging violently in a huge storm over a flood-swollen river, a massive tree blocking the road, and a number of desperate, dangerous bandits.

I was kind of thinking of expected value. In principle, if you always go by expected value, in the long run you will end up maximizing your value. But this may not be the best move if you're low on resources, because with bad luck you'll run out of them and die, even though you made the moves with the highest expected value.

However, your objection does make sense, and Eby's reformulation of my theory is probably the superior one, now that I think about it.

And a very condensed note I wrote to myself (in brainstormish mode, without regard for feasibility or testability):

Emotions are filters on the brain: brain subsystems activated for different reasons in response to different cognitive stimuli. This would explain why those who are happy have a hard time remembering things that are saddening, or vice versa (possibly causing cascades). It seems that flow is the opposite of suffering, as both are responses to difficult problems such as the ones the brain evolved to solve. Pain asymbolia may be the opposite of something like bipolar disorder or multiple personality disorder, and the difference may be the strength of emotion or of the cognitive subsystems similar to emotion. It is odd that people who suffer are more often negative utilitarians: this is probably because the suffering filter is affecting what sorts of memories of experience they have access to, and biasing their thoughts in that direction.

Searle has some weird beliefs about consciousness. Here is his description of a "Fading Qualia" thought experiment, where your neurons are replaced, one by one, with electronics:

… as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, "We are holding up a red object in front of you; please tell us what you see." You want to cry out, "I can't see anything. I'm going totally blind." But you hear your voice saying in a way that is completely out of your control, "I see a red object in front of me."

I don't have Searle's book, and may be missing some relevant context. Does Searle believe normal humans with unmodified brains can consciously affect their external behavior?

If yes, then there's a simple solution to this fear: do the experiment he describes, and then gradually return the test subject to his original, all-biological condition. Ask him to describe his experience. If he reports (now that he's free of non-biological computing substrate) that he actually lost his sight and then regained it, then we'll know Searle is right, and we won't upload. Nothing for Searle to fear.

But if, as I gather, Searle believes that our "consciousness" only experiences things and is never a cause of external behavior, then this is subject to the same criticism as Searle's support of zombies.

Namely: if Searle is right, then the reason he is giving us this warning isn't that he is conscious. Maybe in fact his consciousness is screaming inside his head, knowing that his thesis is false, but is unable to stop him from publishing his books. Maybe his consciousness is already blind, and has been blind from birth due to a rare developmental accident, and it doesn't know what words he types in his books at all. Why should we listen to him, if his words about conscious experience are not caused by conscious experience?

Searle thinks that consciousness does cause behavior. In the scary story, the normal cause of behavior is supplanted, causing the outward appearance of normality. Thus, it's not that consciousness doesn't affect things, but just that its effects can be mimicked.

Nisan's criticism is devastating, and has the advantage of not requiring technological marvels to assess. I do like the elegance of your simple solution, though.

He demonstrates very convincingly that Searle's view is incoherent except under the assumption of strong dualism, using an argument based on more or less the same basic idea as your objection.

This comment got me thinking about it. Of course, LW, being a website, can only deal with verbalizable information (rationality). So what are we missing? Skillsets that are not, and have to be learned in other, practical, ways: interpersonal relationships being just one of many. I also think the emotional brain is part of it. There might be people here who are brilliant thinkers yet emotionally miserable because of their personal context or upbringing, and I think dealing with that would be important. I think a holistic approach is required. Eliezer had already suggested the idea of a rationality dojo. What do you think?

I've been talking to various people about the idea of a Rationality Foundation (working title) which might end up sponsoring or facilitating something like rationality dojos. Needless to say, this idea is in its infancy.

I'm a draftsman, and it has always struck me how absolutely terrible the English language is for talking precisely about ludicrously simple visual concepts. Words like 'parallel' and 'perpendicular' should be one syllable long.

I wonder if there's a way to apply rationality/mathematical thinking beyond geometry and to the world of art.

According to the wiki: "Tacit knowledge (as opposed to formal or explicit knowledge) is knowledge that is difficult to transfer to another person by means of writing it down or verbalizing it."

Thus: "Effective transfer of tacit knowledge generally requires extensive personal contact and trust. Another example of tacit knowledge is the ability to ride a bicycle."

Supports the dojo idea... perhaps in Second Life, once the graphics are better?

As someone who learned cycling as a near-adult, the main insight is that you turn the wheel in the direction in which the bike is falling, to push it back vertical. Once I had been told that negative-feedback mechanism, the only delay was until I got frustrated enough with going slowly to say, "heck with this 'rolling down a slight slope' game, I'm just going to turn the pedals." Whereupon I was genuinely riding the bicycle.

...for about a minute, until I got the bright idea of trying to jump the curb. Did you know that rubbing the knee off a pair of jeans will leave a streak of blue on concrete?

Per my upcoming "Explain Yourself!" article, I am skeptical about the concept of "tacit knowledge". For one thing, it puts up a sign that says, "Hey, don't bother trying to explain this in words", which leads to, "This is a black box; don't look inside", which leads to, "It's okay not to know how this works".

Second, tacit knowledge often turns out to be verbalizable, calling into question whether the term "tacit" really picks out a valid cluster in thingspace[1]. For example, take the canonical example of learning to ride a bike. It's true that you can learn it hands-on, via the inscrutable, patient training of the master. But you can also learn it by being told the primary counterintuitive insights ("as long as you keep moving, you won't tip over"), followed by a little practice on your own.

In that case, the verbal knowledge has substituted one-for-one for (much of) the tacit learning you would have gained on your own from practice. So how much of it was "really" tacit all along? How much of it are you just calling tacit because the master never reflected on what they were doing?

So for me, the appeal to the "difficulty of verbalizing it" certainly has some truth to it, but I find it mainly functions to excuse oneself from critical introspection, and from opening important black boxes. I advise people to avoid using this concept if remotely possible; it tends to say more about you than about the inherent inscrutability of the knowledge.

[1] To some­one who sucks at pro­gram­ming, the abil­ity to re­vise a recipe to pro­duce more serv­ings is “tacit knowl­edge”.

As some­one who has made much of the con­cept of tacit knowl­edge in the past, I’ll have to say you have a point.

(I’m now con­sid­er­ing the ad­den­dum: “made much of it be­cause it served my in­ter­ests to pre­sent some knowl­edge I claimed to have as be­ing of that sort”. I’m not nec­es­sar­ily en­dors­ing that hy­poth­e­sis, just ac­knowl­edg­ing its plau­si­bil­ity.)

It still feels as if, once we toss that phrase out the win­dow, we need some­thing to take its place: words are not uni­ver­sally an effec­tive method of in­struc­tion, prac­tice clearly plays a vi­tal part in learn­ing (why?), and the hy­poth­e­sis that a learner re­con­structs knowl­edge rather than be­ing the re­cip­i­ent of a “trans­fer” in a literal sense strikes me as fa­cially plau­si­ble given the sum of my learn­ing ex­pe­riences.

Perhaps an adult can comprehend “as long as you keep moving, you won’t tip over”, but I have a strong intuition it wouldn’t go over very well with kids, depending on age and dispositions. My parenting experience (anecdotal evidence as it may be) backs that up. You need to see what a kid is doing right or wrong to encourage the former and correct the latter, and you need a hefty dose of patience, as the kid’s anxieties get in the way, sometimes for a long while.

Learning to ride a bike is a canonical example because it is taught early on, and there is hedonic value in learning it early on, but it is typically taught at an age when a kid rarely (or so my hunch says) has the learning-ability to understand advice such as “as long as you keep moving, you won’t tip over”. There is such a thing as learning to learn (and just how verbalizable is that skill?).

It’s all too easy to overgeneralize from a sparse set of examples and obtain a simple, elegant, convincing, but false theory of learning. I hope your article doesn’t fall into that trap. :)

It still feels as if, once we toss that phrase out the window, we need something to take its place: words are not universally an effective method of instruction, practice clearly plays a vital part in learning (why?), and the hypothesis that a learner reconstructs knowledge rather than being the recipient of a “transfer” in a literal sense strikes me as facially plausible given the sum of my learning experiences.

I don’t disagree, but I don’t see how it contradicts my position either. The evidence you give against words being effective is that, basically, they don’t fully constrain what the other person is being told to do, so they can always mess up in unpredictable ways. That’s true, but it just shows how you need to understand the listener’s epistemic state to know which insights they lack that would allow them to bridge the gap.

People do get this wrong, and end up giving “let them eat cake” advice—advice that would only be useful if the problem were already solved. But at the same time, a good understanding of where they are can lead to remarkably informative advice. (I’ve noticed Roko and HughRistik are excellent at this when it comes to human sociality, while some are stuck in “let them eat cake” land.)

Perhaps an adult can comprehend “as long as you keep moving, you won’t tip over”, but I have a strong intuition it wouldn’t go over very well with kids, depending on age and dispositions.

Well, in my case, once it clicked for me, my thought was, “Oh, so if you just keep moving, you won’t tip over, it’s only when you stop or slow down that you tip—why didn’t he just tell me that?”

It’s all too easy to overgeneralize from a sparse set of examples and obtain a simple, elegant, convincing, but false theory of learning. I hope your article doesn’t fall into that trap. :)

Well, if it were a sparse set I wouldn’t be so confident. I have a frustratingly long history of people telling me something can’t be explained or is really hard to explain, followed by me explaining it to newbies with relative ease. And of cases where someone appeals to their inarticulable personal experience for justification, when really it was an articulable hidden assumption they could have found with a little effort.

Anyone is welcome to PM me for an advance draft of the article if they’re interested in giving feedback.

And of cases where someone appeals to their inarticulable personal experience for justification, when really it was an articulable hidden assumption they could have found with a little effort.

leaves me wondering if you underestimate how much effort it takes to notice and express how to do things which are usually non-verbal.

I don’t understand. The part you quoted isn’t about expressing how to do non-verbal things; it’s about people who say, “when you get to be my age, you’ll agree, [and no I can’t explain what experiences you have as you approach my age that will cause you to agree because that would require a claim regarding how to interpret the experience which you have a chance of refuting]”

What does that have to do with the effort needed to express how to do non-verbal things?

Excuse me—I wasn’t reading carefully enough to notice that you’d shifted from claims that it was too hard to explain non-verbal skills to claims that it was too hard to explain the lessons of experience.

Am I interpreting you correctly that you are not denying that some skills can only be learned by practicing the skill (rather than by reading about or observing the skill), but are saying that verbal or written instruction, done well, is just as effective an aid to practice as demonstration?

I’m still a bit skeptical about this claim. When I was learning to snowboard, for example, it was clear that some instructors were better able to verbalize certain key information (keep your weight on your front foot, turn your body first and let the board follow rather than trying to turn the board, etc.), but I don’t think the verbal instructions would have been nearly as effective if they were not accompanied by physical demonstrations.

It’s possible that a sufficiently good instructor could communicate just as effectively through purely verbal instruction, but I’m not sure such an instructor exists. The fact that this is a rare skill also seems relevant even if it is possible—there are many more instructors who can be effective if they are allowed to combine verbal instruction with physical demonstrations.

Good points, but keep in mind snowboarding instructors aren’t optimizing the same thing that a rationalist (in their capacity as a rationalist) is optimizing. If you just want to make money, quickly, and churn out good snowboarders, then use the best tools available to you—you have no reason to convert the instruction into words where you don’t have to.

But if you’re approaching this as a rationalist, who wants to open the black box and understand why certain things work, then it is a tremendously useful exercise to try to verbalize it, and identify the most important things people need to know—knowledge that can allow them to leapfrog a few steps in learning, even and especially if they can’t reach the Holy Grail of full transmission of the understanding.

And I’d say (despite the first paragraph in this comment) that it’s a good thing to do anyway. I suspect that people’s inability to explain things stems in large part from a lack of trying—specifically, a lack of trying to understand what mental processes are going on inside them that allow a skill to work like it does. They fail to imagine what it is like not to have this skill and assume certain things are easy or obvious which really aren’t.

To more directly answer your question, yes, I think verbal instruction, if it accounts for the epistemic state of the student, can replace a lot of what normally takes practice to learn. There are things you can say that get someone in just the right mindset to bypass a huge number of errors that are normally learned hands-on.

My main point, though, is that people severely overestimate the extent of their knowledge which can’t be articulated, because the incentives for such a self-assessment are very high. Most people would do well to avoid appeals to tacit knowledge, and instead introspect on their knowledge so as to gain a deeper understanding of how it works, labeling knowledge as “tacit” only as a last resort.

It’s possible that a sufficiently good instructor could communicate just as effectively through purely verbal instruction but I’m not sure such an instructor exists.

I would suspect this has more to do with the skill of the student in translating verbal descriptions into motions. You can perfectly understand a series of motions to be executed under various conditions, without having the motor skill to assess the conditions and execute them perfectly in real-time.

For example, take that canonical example of learning to ride a bike. It’s true that you can learn it hands-on, using the inscrutable, patient training of the master. But you can also learn it by being told the primary counterintuitive insights (“as long as you keep moving, you won’t tip over”), and then a little practice on your own.

In that case, the verbal knowledge has substituted one-for-one with tacit learning you would have gained on your own from practice.

I’m looking forward to your article, and I think that you’re right to emphasize the vast gap between “unverbalizable” and “I don’t know at the moment how to verbalize it”.

But, to really pass the “bicycle test”, wouldn’t you have to be able to explain verbally how to ride a bike so well that someone could get right on the bike and ride perfectly on the first try? That is, wouldn’t you have to be able to eliminate even that “little practice on your own”?

Or is there some part of being able to ride a bike that you don’t count as knowledge, and which forms the ineliminable core that needs to be practiced?

But, to really pass the “bicycle test”, wouldn’t you have to be able to explain verbally how to ride a bike so well that someone could get right on the bike and ride perfectly on the first try? That is, wouldn’t you have to be able to eliminate even that “little practice on your own”?

Depends on what the “bicycle test” is testing. For me, the fact that something is staked out as a canonical, grounding example of tacit knowledge, and then is shown to be largely verbalizable, blows a big hole in the concept. It shows that “hey, this part I can’t explain” was groundless in several subcases.

I do agree that some knowledge probably deserves to be called tacit. But given the apparent massive relativity of tacitness, and the above example, it seems these cases are so rare that you’re best off working from the assumption that nothing is tacit, rather than looking for cases that you can plausibly claim are tacit.

It’s like any other case where one possibility should be considered last. If you do a random test on General Relativity and find it to be way off, you should first work from the assumption that you, rather than GR, made a mistake somewhere. Likewise, if your instinct is to label some of your knowledge as tacit, your first assumption should be, “there’s some way I can open up this black box; what am I missing?”. Yes, these beliefs could be wrong—but you need a lot more evidence before rejecting them should even be on the radar.

(And to be clear, I don’t claim my thesis about tacitness to deserve the same odds as GR!)

something is staked out as a canonical, grounding example of tacit knowledge, and then is shown to be largely verbalizable

Just to be clear, I don’t think it has been shown in the case of bike-riding that the knowledge can be transferred verbally. You can give someone verbal instruction that will help them improve faster at bike-riding; that isn’t at issue. It’s much less clear that telling someone the actual control algorithm you use when you ride a bike is sufficient to transform them from novice into proficient bike rider.

You can program a robot to ride a bike, and in that sense the knowledge is verbalizable, but looking at the source code would not necessarily be an effective method of learning how to do it.

I think being able to verbally transmit the knowledge that solves most of the problem for them is proof that at least some of the skill can be transferred verbally. And of course it doesn’t help to tell someone the detailed control algorithm to ride a bike, and I wouldn’t recommend doing so as an explanation—that’s not the kind of information they need!

One day, I think it will be possible to teach someone to ride a bike before they ever use one, or even carry out similar actions, though you might need a neural interface rather than spoken words to do so. The first step in such a quest is to abandon appeals to tacit knowledge, even if there are cases where it really does exist.

None, and nobody. I got a bicycle and tried to ride it until I could ride it. It took about three weeks from never having sat on a bicycle to confidently mixing with heavy traffic. (At the age of 22, btw. I never had a bicycle as a child.)

The first line that JoshB quoted from Wikipedia is fine—there is this class of knowledge—but I don’t agree with the second at all. Some things you can learn just by having a go untutored. Where an instructor is needed, e.g. in martial arts, the only trust required is enough confidence in the competence of the teacher to do as he says before you know why.

I guess that more people learn to ride a bike in childhood than as adults, but I believe that the usual method at any age is to get on it and ride it. There really isn’t much you can do to teach someone how to do it.

OK, so I suppose it doesn’t take much personal contact and trust to acquire a skill of the bike-riding type, particularly if you’re an autonomous enough learner, and particularly if the skill is relatively basic.

The original assertion, though, was about personal contact and trust being required to transfer a skill of the bike-riding type, and perhaps one reason to make this assertion is that the usual method involves a parent dispensing encouragement and various other forms of help, vis-a-vis a child. (I learnt it from my grandfather, and have a lot of positive affect to accompany the memories.)

Providing an environment in which learning, an intrinsically risky activity, becomes safe and pleasurable—I know from experience that this takes rapport and trust; it doesn’t just happen. Such an environment is perhaps not a prerequisite to acquiring a non-verbalized skill, but it does help a lot; as such, it makes learning possible for people who would otherwise give up before they made it to the first plateau.

We must have had very different experiences of many things. Tell me more about learning being risky. I have been learning Japanese drumming since the beginning of last year (in a class), and stochastic calculus in the last few months (from books), and “risky” is not a word it would occur to me to apply to either process. The only risk I can see in learning to ride a bicycle is the risk of crashing.

One major risk involved in learning is to your self-esteem: feeling ridiculous when you make a mistake, feeling frustrated when you can’t get an exercise right for hours of trying, and so on.

As you note, in physical aptitudes there is a non-trivial risk of injury.

There is the risk, too, of wasting a lot of time on something you’ll turn out not to be good at.

Perhaps these things seem “safe” to you, but that’s what makes you a learner, in contrast with large numbers of people who can’t be bothered to learn anything new once they’re out of school and in a job. They’d rather risk their skills becoming obsolete and ending up unemployable than risk learning: that’s how scary learning is to most people.

One major risk involved in learning is to your self-esteem: feeling ridiculous when you make a mistake, feeling frustrated when you can’t get an exercise right for hours of trying, and so on.

I would say that the problem then is with the individual, not with learning. Those feelings rest on false beliefs that no-one is born with. Those who acquire them learn them from unfortunate experiences. Others chance to have more fortunate experiences and learn different attitudes. And some manage in adulthood to expose their false beliefs to the light of day, clearly perceive their falsity, and stop believing them.

They’d rather risk their skills becoming obsolete and ending up unemployable than risk learning: that’s how scary learning is to most people.

I doubt people are consciously making this decision; rather, they aren’t weighing the potential rewards against the potential risks well. A risk that is in the far future is often taken less seriously than a small risk now.

People who buy insurance are demonstrating the ability to trade off small risks now against bigger risks in the future, but often the same people invest less in keeping their professional skills current than they do in insurance.

Personal experience tells me that I had (and still have) a bunch of Ugh fields related to learning, which suggests that there are actual negative consequences of engaging in the activity (per the theory of Ugh fields).

My hunch is that the perceived risk of learning accounts in significant part for why people don’t invest in learning, compared to the low perceived reward of learning. I could well be wrong. How could we go about testing this hypothesis?

I believe that the usual method at any age is to get on it and ride it. There really isn’t much you can do to teach someone how to do it.

Are you serious? I could never have learned to ride a bike without my parents spending hours and hours trying to teach me. Did you also learn to swim by jumping into water and trying not to drown? I’d be very surprised if most people learned to ride a bike without instruction, but I may be unusual.

Did you also learn to swim by jumping into water and trying not to drown?

There was actually at some point a theory that “babies are born knowing how to swim”, and on one occasion at around age three, at a holiday resort the family was staying at, I was thrown into a swimming pool by a caretaker who subscribed to this theory.

It seems that after that episode nobody could get me to feel comfortable enough in water to get any good at swimming (in spite of summer vacations by the seaside for ten years straight, under the care of my grandad who taught me how to ride a bike). I only learned the basics of swimming, mostly by myself with verbal instruction from a few others, around age 30.

I could never have learned to ride a bike without my parents spending hours and hours trying to teach me.

Maybe there’s a cultural difference, but I don’t know what country you’re in (or were in). I’ve never heard of anyone learning to ride a bike except by riding it. But clearly we need some evidence. I don’t care for the bodge of using karma to conduct a poll, so I’ll just ask anyone reading this who can ride a bicycle to post a reply to this comment saying how they learned, and in what country. “Taught” should mean active instruction, something more than just someone being around to provide comfort for scrapes and to keep children out of traffic until they’re ready.

Results so far:

RichardKennaway: self-taught as adult, late ’70s, UK

Morendil: taught in childhood by grandfather, UK?

Blueberry: taught in childhood by parents, where?

So that’s two to one against my current view, but those replies may be biased: other self-taught people will not have had as strong a reason to post agreement.

I don’t know how much this will support your position, but: mid 1980s, Texas, USA, by my father.

And as I said above, it did take a while to learn, but afterward, my reaction was, “Wait—all I have to do is keep in motion and I won’t fall over. Why didn’t he just say that all along?” That began my long history of encountering people who overestimate the difficulty of teaching or justifying something, or fail to simplify the process.

ETA: Also, I haven’t ridden a bike in over 15 years, so that might be a good test of whether my “just keep in motion” heuristic allows me to preserve the knowledge.

Yeah, I wasn’t so sure it would be a good test. Still, I’m not sure how well the “you don’t forget how to ride a bike” hypothesis is tested, nor how much of its unforgettability is due to the simplicity of the key insights.

I don’t disagree, but there’s typically a barrier, increasing with time since last use, that must be overcome to re-access that kinesthetic knowledge. And I think verbal heuristics like the one I gave can greatly shorten the time you need to complete this process.

Early 90s, US. I also had training wheels for a while first, which didn’t actually teach me anything. I didn’t learn until they were removed. And I also had someone running along for reassurance.

There’s some variation in method of instruction. My grandpa had fitted my bike with a long handle in the back and used that to help me balance after taking the training wheels off. With one of my kids I tried the method of gradually lifting the training wheels to make the balance more precarious over time. One of the other two just “got it”, as I remember, in one or two sessions. Otherwise it was the standard riding down a slight slope and advising them to “keep your feet on the pedals”, and running alongside for reassurance.

The truth is, that’s how most skilled artists learned to draw. In the past there was a more formalized teaching role, often starting at age eight, but you can go through school, and even get through art school, having been given so little knowledge that if you know how to draw a human from imagination, you can confidently say you are an autodidact.

It’s not that art (particularly representational figure drawing, from imagination or not) is inherently unteachable, but a lot of people tend to think so.

This is not the only skill like this, although I think it’s one that’s perhaps the least understood and where misinformation is the most tolerated.

It’s not so much, “Such insolence, our ideas are so awesome they can not be broken down by mere reductionism” as “Wow, words are really bad at describing things that are very different from what most of the people speaking the language do.”

I think you could make an elaborate set of equations on a cartesian graph and come up with a drawing that looked like it, and say fill up RGB values #zzzzzz at coordinates x,y or whatever, but that seems like a copout since that doesn’t tell you anything about how Fragonard did it.

This reminds me of an exercise we did in school. (I don’t remember either when or for what subject.)

Everyone was to make a relatively simple image, composed of lines, circles, triangles and the such. Then, without showing one’s image to the others, each of us was to describe the image, and the others to draw according to the description. The “target” was to obtain reproductions as close as possible to the original image.

It was a very interesting exercise for all involved: it’s surprisingly hard to describe an image precisely, even given the quite simple drawings, in such a way that everyone interprets the description the way you intended it. I vaguely remember I did quite well compared with my classmates in the describing part, and still had several “transcriptions” that didn’t look anywhere close to what I was saying.

I think the lesson was about the importance of clear specifications, but then again it might have been just something like English (a foreign language for me) vocabulary training.

An example:

Draw a square, with horizontal & vertical sides. Copy the square twice, once above and once to the right, so that the two new squares share their bottom and, respectively, left sides with the original square. Inside the rightmost square, touching its bottom-right corner, draw another square of half the original’s size. (Thus, the small square shares its bottom-right corner with its host, and its top-left corner is on the center of its host.) Inside the topmost square, draw another half-size square, so that it shares both diagonals with its host square. Above the same topmost square, draw an isosceles right-angled triangle; its sides around the right angle are the same length as the large squares’; its hypotenuse is horizontal, just touching the top side of the topmost square; its right angle points upwards, and is horizontally aligned with the center of the original square. (Thus, the original square, its copy above, and the triangle above that, should form an upwards-pointing arrow.) Then make a copy of everything you have, to the right of the image, mirrored horizontally. The copy should be vertically aligned with the original, and share its left-most line with the right-most line of the original.

Try to follow the instructions above, and then compare your drawing with the non-numbered part of this image.

The exercise we did in school was a bit harder: the images had fewer parts (a rectangle, an ellipse, a triangle, and a couple of lines, IIRC), but with more complex relationships for alignment, sizes and angles.

You could probably get pretty good results without messing with complex equations, by first describing the full picture, then describing what’s in the four quadrants made by drawing vertical and horizontal lines that split the image exactly in half, then describing quadrants of these quadrants, split in a similar way, and so on. The artist could use their skills to draw the details without an insanely complex encoding scheme, and the grid discipline would help fix the large-scale geometry of the image.

Edit: A 3x3 grid might work better in practice; it’s more natural to work with a center region than to put the split point right in the middle of the image, which most probably contains something interesting. On the other hand, maybe the lines breaking up the recognizable shapes in the picture (already described in casual terms for the above-level description) would help bring out their geometrical properties better.

Edit 2: Michael Baxandall’s book Patterns of Intention has some great stuff on using language to describe images.
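The quadrant-by-quadrant scheme can be sketched in a few lines, assuming a toy image represented as a grid of characters (‘#’ for a filled cell); the `describe` generator here is a hypothetical stand-in for the casual verbal description a person would give of each region.

```python
def quadrants(grid):
    """Split a grid of rows into four quadrants (NW, NE, SW, SE)."""
    h, w = len(grid) // 2, len(grid[0]) // 2
    return [
        [row[:w] for row in grid[:h]],  # top-left
        [row[w:] for row in grid[:h]],  # top-right
        [row[:w] for row in grid[h:]],  # bottom-left
        [row[w:] for row in grid[h:]],  # bottom-right
    ]

def describe(grid, depth=0, label="whole image"):
    """Yield (depth, label, filled-cell count) for each region, recursively."""
    filled = sum(row.count("#") for row in grid)
    yield depth, label, filled
    # Only recurse into regions that are splittable and non-empty.
    if len(grid) > 1 and len(grid[0]) > 1 and filled:
        names = ["top-left", "top-right", "bottom-left", "bottom-right"]
        for name, sub in zip(names, quadrants(grid)):
            yield from describe(sub, depth + 1, name)

image = ["#...",
         "##..",
         "....",
         "...#"]
for depth, label, filled in describe(image):
    print("  " * depth + f"{label}: {filled} filled cells")
```

Counting filled cells is just a placeholder for whatever the describer would actually say about each region; the recursion is the part that supplies the “grid discipline”.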

As a teaching tool for people who can’t draw, I haven’t seen it be effective, and I doubt it would be, since it’s so easy for novice artists to screw up when they have the image right in front of them. But it’s awesome if you’ve got a deadline and don’t want to spend all your time checking and rechecking your proportions.

There’s a more effective method which uses a ruler or compass and is often used to copy Bargue drawings: use precise measurements around a line at the meridian and essentially connect the dots. For the curious:

This might work long distance: “Okay, draw the next dot 9/32nds of an inch away at 12 degrees down to the right.”

This still seems like a bit of a copout, though. Yes, there are ways to assemble copies of images using a grid, but it doesn’t help us figure out how such freehand images were made in the first place. We’re not even taking a crack at the little black box.
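Out of curiosity, here is what one such dictated step works out to numerically. The function below is my own illustration, not part of the Bargue method; it just converts a distance and an angle below the horizontal into the horizontal and vertical offsets the listener would actually plot.

```python
import math

def step(distance: float, angle_deg: float) -> tuple[float, float]:
    """Offsets (dx, dy) for one dictated step; the angle slopes downward."""
    rad = math.radians(angle_deg)
    return distance * math.cos(rad), distance * math.sin(rad)

# "9/32nds of an inch away at 12 degrees down to the right"
dx, dy = step(9 / 32, 12)
print(f"right {dx:.3f} in, down {dy:.3f} in")  # right 0.275 in, down 0.058 in
```

The asymmetry is worth noting: at shallow angles, most of the dictated distance goes into the horizontal offset, so small errors in the angle matter more than small errors in the distance.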

Drawing on the Right Side of the Brain seems to be the classic for teaching people how to draw. It’s a bunch of methods for seeing the details of what you’re seeing (copying a drawing held upside down, drawing shadows rather than objects) so that you draw what you see rather than a mental simplified hieroglyphic of what you see.

This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.

Good quality government policy is an important issue to me (it’s my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.

There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with:

1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.

2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks; a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the general public, or sometimes with anyone outside your team.

By the standards of Traditional Rationality, policy advice is often made without meeting a burden of proof. Best guesses and theoretical considerations are too weak to reach conclusions. A proper practitioner of Traditional Rationality wouldn’t be able to make any kind of recommendation; one could identify some promising initial hypotheses, but that’s it.

But just because you didn’t have time to come up with a good answer doesn’t mean that Ministers don’t expect an answer. And a practitioner of Bayesian Rationality always has a best guess as to what is true; even if the evidence base is non-existent, you can fall back on your prior. You don’t want to be overconfident in stating your position; assumptions must be outlined and sensitivities should be explored. But you still need to give an answer, and that’s what attracts me to Bayesian approaches: you don’t have to be officially agnostic until being presented with a level of evidence that is unrealistically high for policy work.

It seems to me that if you have very good quality evidence then Bayesian and Traditional Rationality are very similar. Good evidence either proves or disproves a proposition for a Traditional Rationalist, and for a Bayesian Rationalist it will shift their probability estimate, as well as increasing their confidence a lot. The biggest difference seems to me to be that Bayesian Rationality is able to make use of weak evidence in a way Traditional Rationality can’t.
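To make the “weak evidence” point concrete, here is a toy odds-form Bayes update; the likelihood ratio of 1.5 is an arbitrary stand-in for a weak piece of evidence that a burden-of-proof rule would simply ignore.

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after one piece of evidence, via the odds form."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = 0.5
for _ in range(3):          # three independent weak pieces of evidence
    p = update(p, 1.5)
print(round(p, 3))  # 0.771
```

Each update nudges the estimate only a little, but the nudges compound; a rule that waits for a single decisive proof extracts nothing from any of them.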

This reminded me of one of my favorite movie dialogues, from Sunshine. The context was actually physics, but the complexity of the situation and the time frame put the characters in the same situation as you with the Cabinet Ministers.

Capa: It’s the prob­lem right there. Between the boost­ers and the grav­ity of the sun the ve­loc­ity of the pay­load will get so great that space and time will be­come smeared to­gether and ev­ery­thing will dis­tort. Every­thing will be un­quan­tifi­able.

Kaneda: You have to come down on one side or the other. I need a de­ci­sion.

Capa: It’s not a de­ci­sion, it’s a guess. It’s like flip­ping a coin and ask­ing me to de­cide whether it will be heads or tails.

Kaneda: And?

Capa: Heads… We har­vested all Earth’s re­sources to make this pay­load. This is hu­man­ity’s last chance… our last, best chance… Searle’s ar­gu­ment is sound. Two last chances are bet­ter than one.

Yes, that’s a good ex­am­ple. There are times when a de­ci­sion has to be made, and say­ing you don’t know isn’t very use­ful. Even if you have very lit­tle to go on, you still have to de­cide one way or the other.

I am not at all like you. I don’t have much in­ter­est in policy at all, and I do tend to re­fuse to hold a po­si­tion, be­ing very mind­ful of how easy it is to be com­pletely off course (Prob­a­bly from read­ing too much his­tory of sci­ence. It’s “the grave­yard of dead ideas”, af­ter all.). I’m likely to tell the Cabi­net Ministers to get off my back or they’ll have ab­solutely use­less recom­men­da­tions.

How­ever, I think you have hit upon the point that makes Bayesi­anism at­trac­tive to me: it’s ra­tio­nal­ity you can use to act in real-time, un­der un­cer­tainty, in nor­mal life. Tra­di­tional Ra­tion­al­ity is slow.

I see your point; the trouble is that a recommendation that comes too late is often absolutely useless. A lot of policy is time-dependent: if you don't act within a certain time frame then you might as well do nothing. While sometimes doing nothing is the right thing to do, a late recommendation is often no better than no recommendation.

Yeah, I for­got to add that you’ve budged me slightly from my staunch pos­i­tivist at­ti­tude for so­cial sci­ence. Thanks. Read­ing up on com­plex adap­tive sys­tems has made me just that much more skep­ti­cal about our abil­ity to pre­dict policy’s effects, and per­haps bi­ased me.

As it happens, I'm pretty sceptical as to how much we can know as well. There's nothing like doing policy to gain an understanding of how messy it can be. The social sciences have a less than wonderful record in developing knowledge (look at the record of development economics, as one example), and economic forecasting is still not much better than voodoo, but it's not like there's another group out there with all the answers. We don't have all of the answers, or even most of them, but we're better than nothing, which is the only alternative.

Nothing is often a pretty good alternative. Government action always comes at a cost, even if only the deadweight loss of taxation (search for "public choice" for reasons you might expect the cost to be higher than that).
I'm not trying to turn this into a political debate, but you should consider that doing nothing is not necessarily a bad thing, and that what you do is not necessarily better.

When I said “bet­ter than noth­ing” I was refer­ring to ad­vice, not the ac­tual ac­tions taken. My back­ground is in eco­nomics so I’m quite fa­mil­iar with both dead-weight loss of tax­a­tion and pub­lic choice the­ory, though these days I lean more to­ward Bryan Ca­plan’s ra­tio­nal ir­ra­tional­ity the­ory of gov­ern­ment failure.

I agree that noth­ing is of­ten a good thing for gov­ern­ments to do, and in many cases that is the ad­vice that Cabi­net re­ceives.

Amaz­ingly, there re­ally are do­mains in which so­cial­ism ac­tu­ally works. In the first half of the nine­teenth cen­tury, the U.S. had pri­va­tized fire­fight­ing. It was hor­rible. After the Amer­i­can Civil War, fire­fight­ing was taken over by gov­ern­ments, and, as­tound­ingly enough, things ac­tu­ally got bet­ter!

Simply responding with a Randian quote doesn't show that government doesn't work. Moreover, there are some things where government has worked well. At the most basic level, one needs governments to protect property rights, without which markets can't function. Similarly, various forms of pooled goods are useful (you are welcome to try to have roads run by private industry and see how well that works). But even beyond that, government policies are helpful for dealing with negative externalities. In particular, some forms of harm are by nature spread out and not connected strongly to any single source. The classic example is pollution. Since pollution is spread out, the transaction cost is prohibitively high for any given individual to try to reduce pollution levels they are subject to. But a government, using regulation and careful taxation, can do this efficiently. In some situations, this can even be done in conjunction with market forces (such as cap and trade systems). In the US, this was very successful in efficiently handling levels of sulfur dioxide. See this paper. Governments are often slow and inefficient. But to claim that well-thought-out policies never exist? That's simply at odds with reality.

In par­tic­u­lar, some forms of harm are by na­ture spread out and not con­nected strongly to any sin­gle source. The clas­sic ex­am­ple is pol­lu­tion. Since pol­lu­tion is spread out, the trans­ac­tion cost is pro­hibitively high for any given in­di­vi­d­ual to try to re­duce pol­lu­tion lev­els they are sub­ject to. But a gov­ern­ment, us­ing reg­u­la­tion and care­ful tax­a­tion, can do this effi­ciently. In some situ­a­tions, this can even be done in con­junc­tion with mar­ket forces (such as cap and trade sys­tems). In the US, this was very suc­cess­ful in effi­ciently han­dling lev­els of sulfur diox­ide.

Even from a libertarian point of view, pollution is something that causes harm, like murder or theft. The government's job is to enforce laws that mitigate sources of harm and, when possible, correct harms against individuals. A person or corporation who puts out some amount of pollution should be forced to pay for any cleanup or harm that they cause.

If you drive a car, you emitted some fraction of the pollution that caused temperatures to go up, caused smog-induced illness, and caused some other miscellaneous harms that cost some amount of money. If that amount of money was 40 billion dollars, and you contributed 1 billionth towards the harm, you should pay 40 dollars.

This should be even less controversial than imprisoning murderers.
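The proportional-liability arithmetic above can be sketched in a couple of lines. The $40 billion total harm and the one-billionth contribution are the hypothetical numbers from the comment, not real estimates.

```python
# A sketch of proportional liability for a diffuse harm like pollution.
# All numbers are the parent comment's hypotheticals.

def liability_share(total_harm, contribution_fraction):
    """Each polluter pays in proportion to the fraction of harm they caused."""
    return total_harm * contribution_fraction

total_harm = 40e9    # $40 billion in aggregate pollution damage
my_fraction = 1e-9   # I contributed one billionth of the emissions
print(liability_share(total_harm, my_fraction))  # -> 40.0 dollars
```

The practical difficulty, of course, is the transaction cost of measuring each individual's fraction, which is the argument in the grandparent for regulation or taxation doing the aggregation instead.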

I was also unpleasantly surprised to find that there was a group of people griping about programs that would make it easier to identify cars that weren't liability-insured or pollution-tested, and that this was called a "libertarian" position.

ETA: And liber­tar­ian-lean­ing aca­demics don’t seem to “get” why pay­ing pol­luters to go away isn’t a solu­tion, and don’t even un­der­stand what prob­lem is sup­posed to be solved, even when hy­po­thet­i­cally placed in such a situ­a­tion! (See the ex­change be­tween me and Han­son in the link.)

It’s not so much that it doesn’t solve the prob­lem as things just don’t work that way. For starters, cur­rent en­ergy dis­tri­bu­tion meth­ods are lo­cal mo­nop­o­lies, so they are strongly reg­u­lated on price be­cause the com­pe­ti­tion mechanism doesn’t work as it should. The idea that a cus­tomers might “choose” cleaner en­ergy doesn’t always work.

Second, some logging companies tried that. They had an outside company come in, do an inspection, and certify the ecological viability of their practices. There were a fair number of people who actually were willing to pay a little more. The problem is, another set of companies came by, inspected and approved themselves (with a different label that they invented), and customers weren't able to tell the difference. That's a problem.

It’s not so much that it doesn’t solve the prob­lem as things just don’t work that way. For starters, cur­rent en­ergy dis­tri­bu­tion meth­ods are lo­cal mo­nop­o­lies, so they are strongly reg­u­lated on price be­cause the com­pe­ti­tion mechanism doesn’t work as it should. The idea that a cus­tomers might “choose” cleaner en­ergy doesn’t always work.

Also, to a great extent, electricity is fungible. Suppose you have both windmills and coal-fired plants connected to the same electrical grid, and they both generate equal amounts of power. Now suppose I tell the electric company that I only want to buy power from the windmills, so instead of getting half wind power and half coal power, I get 100% wind power (on paper). However, the electric company doesn't actually have to change the way it produces electricity in order to do this. All they have to do is slightly increase the percentage of coal power that they deliver to everyone else (on paper). So all that changes is numbers on paper, and there's exactly as much coal power being generated as before.
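The on-paper accounting described above can be made concrete with a toy model: one customer electing "100% wind" changes the labels, not the generation mix. All numbers here are invented for illustration.

```python
# Toy model: "buying wind power" as a relabeling exercise.
# Quantities in MWh; all figures invented for illustration.

generation = {"wind": 50.0, "coal": 50.0}         # what is actually produced
customers = {"me": 10.0, "everyone_else": 90.0}   # what is consumed

# On paper: I am assigned pure wind; the remaining wind and all the
# coal are spread over everyone else.
my_allocation = {"wind": customers["me"], "coal": 0.0}
others_allocation = {
    "wind": generation["wind"] - my_allocation["wind"],
    "coal": generation["coal"],
}

# The physical mix is untouched: total coal generated is unchanged,
# it has merely been reassigned on paper.
assert generation["coal"] == my_allocation["coal"] + others_allocation["coal"]
print(others_allocation)  # everyone else's paper mix just got dirtier
```

Green-power tariffs only change the physical mix if enough customers opt in that the paper allocation would exceed actual wind generation, forcing new capacity to be built.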

Your noise pollution example is a potentially problematic one for libertarians, but the obvious answer that occurs to me is the one I would expect many thoughtful libertarians to make. You are assuming a libertarian world with largely unchanged amounts of public space, which is a problematic combination. The space outside your window has no reason to be public space. You would see a lot more 'gated community' type arrangements in a more libertarian society. People with low noise tolerance could choose to live in communities where the 'public' space was owned by a municipal service provider with strict rules about noise pollution. Anyone not adhering to these rules could be ejected from the property.

Many com­mon prob­lems with imag­ined liber­tar­ian so­cieties dis­solve when you al­low for much greater pri­vate own­er­ship of cur­rently pub­lic land than cur­rently ex­ists.

It’s eas­ier to move out? You are not born un­der a land­lord. You do not swear fealty to the flag of the land­lord. No­body thinks the land­lord should be able to draft you for civil ser­vice. The land­lord can­not put you in jail for failing to pay rent. There’s a long, long list of other differ­ences where the land­lord as gov­ern­ment anal­ogy breaks down. I’m sur­prised any­one still brings it up.

EDIT: Ha. You changed it. In reality, not necessarily that much, although it's nice to have an extra governmental agency that you can choose to pay or not, and that is accountable to the government in a transparent way. Asking the government to regulate itself is almost as dumb as asking a logging company to regulate itself.

Well, if you expect a landlord to perform the functions of a government by, say, regulating noise levels for the benefit of tenants, then doesn't the analogy hold in this particular case? If regulation is bad, does it matter whether it's regulation by landlord or regulation by city council?

If a land­lord tries to have you evicted, and you re­fuse to leave when a court rules that you must do so, lo­cal law en­force­ment is al­lowed to phys­i­cally re­move you from the prop­erty. That doesn’t sound non-vi­o­lent to me.

This is a fair point. I would note, however, that eviction typically requires repeated notification, and opportunities for you to modify your behavior before encountering violence.

Contrast with how your local sheriff can bust down your door in the middle of the night, shoot your dogs, destroy your property, and arrest you merely on suspicion of possessing marijuana. And then be praised for it even if you are innocent.

Mu­ni­ci­pal ser­vices are gen­er­ally pro­vided by a lo­cal gov­ern­ment but this is largely an ar­ti­fact of the way mod­ern democ­ra­cies are or­ga­nized. Pri­vate ar­range­ments are fairly rare in the mod­ern world but cruise ships, pri­vate re­sorts, cor­po­rate cam­puses and on a smaller scale large man­aged apart­ment build­ings provide ex­am­ples of de­cou­pling the idea of pro­vi­sion of mu­ni­ci­pal ser­vices and gov­ern­ment.

What if you had a dozen different companies that provided services like that? They would each have a monopoly in different areas; however, the local governments would still be able to choose which one they wanted, and at any time they were displeased they could switch. Actually, this is a good idea!

You can prob­a­bly go fur­ther than that. Mu­ni­ci­pal ser­vices can be un­bun­dled and can op­er­ate with­out a ge­o­graph­i­cal monopoly. This is already widely done for ca­ble and tele­coms in the US and UK and for elec­tric­ity and gas in the UK. Some coun­tries do it for wa­ter and san­i­ta­tion ser­vices. There are ex­am­ples wor­ld­wide of it be­ing done for trans­porta­tion, re­fuse col­lec­tion, health and ed­u­ca­tion. Ar­gu­ments that such ser­vices are a ‘nat­u­ral monopoly’ are usu­ally pro­moted most strongly by those who wish to op­er­ate that monopoly with gov­ern­ment pro­tec­tion.

If the “mu­ni­ci­pal ser­vice provider” has the power to en­force its edicts on noise level (be­cause it has the power to ex­ile those who vi­o­late them), then doesn’t that mean that it has ex­actly the same power over noise that a gov­ern­ment would—and the same po­ten­tial to mi­suse that power?

I tend to think that the right of exit is the ultimate and fundamental check on such abuses of power. This is why I favour decentralization/federalization/devolution as improvements to the status quo of increasing centralization of political power. I think that on more or less every level of government we would benefit from decentralization of power. City-wide bylaws on noise pollution are too coarse-grained, for example. An entertainment district or an area popular with students should have different standards than a residential area with many working families. Zoning rules are an attempt to make such allowances, but I think private solutions are likely to work better. I'd at least like to see them tried so we can start to see what works.

Hitler was kind to animals. Even accepting your dubious claim, it is not enough to show that government sometimes achieves positive outcomes (and don't forget to ask what criteria are being used to determine 'positive'). The relevant question is whether government intervention produces an overall net benefit. Generally it seems you can make the strongest case for this in small, relatively homogeneous countries. These results do not necessarily scale.

you are wel­come to try to have roads run by pri­vate in­dus­try and see how well that works

There are an awful lot of hid­den as­sump­tions in this state­ment.

But a gov­ern­ment, us­ing reg­u­la­tion and care­ful tax­a­tion, can do this effi­ciently.

Can in the­ory and ever ac­tu­ally do in prac­tice are wor­lds apart. Nega­tive ex­ter­nal­ities are one of the stronger eco­nomic ar­gu­ments for gov­ern­ment in­ter­ven­tion but ac­tual ex­am­ples of gov­ern­ment reg­u­la­tion rarely ap­prox­i­mate the the­o­ret­i­cal reg­u­la­tory frame­work pro­posed by economists. This is largely be­cause the be­havi­our of gov­ern­ments is de­ter­mined pri­mar­ily by pub­lic choice the­ory and not by the benev­olent, en­light­ened pur­suit of eco­nomic ra­tio­nal­ity.

I agree with most of what you said. That's one of the reasons I gave the historical example of SO2. The claim being made by the person I was responding to was not a remark about net gain, but the claim, regarding "Good quality government policy", that "There is no more evidence for that than there is for God", backed up with an argument from irrelevant authority. So giving examples to show that's not the case accomplishes the basic goal.

There’s a pretty good prece­dent for this hap­pen­ing in the form of the railway sys­tem in early Amer­ica. I think I’d clas­sify it as a mar­ket failure as pri­vate roads and railways have a way of be­com­ing lo­cal mo­nop­o­lies and hav­ing an enor­mous ad­van­tage when it comes to rent-seek­ing be­hav­ior.

It’s not that it’s im­pos­si­ble, I just don’t think it’s a very good idea.

One of the hid­den as­sump­tions I was think­ing of is the as­sump­tion that gov­ern­ment built roads have been a net benefit for Amer­ica. The high­way sys­tem has been a large im­plicit sub­sidy for all kinds of busi­ness mod­els and lifestyle choices that are not ob­vi­ously op­ti­mal. Amer­ica’s de­pen­dence on oil and out­size en­ergy de­mands are in large part a func­tion of the in­cen­tives cre­ated by huge gov­ern­ment ex­pen­di­ture on high­ways. Subur­ban sprawl, McMan­sions, re­tail parks and long com­mutes are all un­in­tended con­se­quences of the im­plicit sub­sidies in­her­ent in large scale gov­ern­ment road con­struc­tion.

Amer­i­can cul­ture and so­ciety would prob­a­bly look quite differ­ent with­out a his­tory of gov­ern­ment road con­struc­tion. It’s not ob­vi­ous to me that it would not look bet­ter by many mea­sures.

Not necessarily. If you've ever been to Disney World, it's not like that. And hell, government roads in the States and Japan often dissolve into a complex and inefficient series of toll roads, at least in some areas.

I’m much more wor­ried about un­com­pet­i­tive prac­tices, like pow­er­ful lo­cal mo­nop­o­lies and rent seek­ing be­hav­ior.

Disney World owns the land; they can do whatever they want. But here, in order to make efficient roads, we have to use eminent domain. A private company wouldn't be able to do that. In order to have a governmentless society, you have to a) create a nearly-impossible-to-maintain system of total anarchy like exists in parts of Afghanistan today, or b) create a very corrupt and broken society ruled by private corporations, which is essentially a government anyway.

The government does use private contractors in many cases for different projects. It might work on roads; I'm not sure if they already use it, but it's still a lot different from asking a private corporation to decide when and where to build roads.

They do. And pri­vate cor­po­ra­tions or coun­cils already de­cide where to build the roads for some things, it’s just that all of those things only work if they’re already con­nected to other in­fras­truc­ture, which, in the US, means pub­lic fed­eral, state and lo­cally built roads.

Well, I think you aren’t re­ally imag­i­na­tive enough in your view of an­ar­chy, but… I’m not an an­ar­chist and I’m not go­ing to defend an­ar­chy.

I disagree with the idea that efficient roads require eminent domain. It's not even hard to prove: all I have to do is give one example of a business that was made without eminent domain. The railroad system, which I brought up before.

I still mostly think a na­tion of pri­vate roads is a bad idea, since it’s hard to imag­ine a way or sce­nario in which they wouldn’t be a lo­cal monopoly.

Which is part of the reason I think it's a bad idea. The railroads constantly petitioned for those rights and that money, and essentially leeched off the American people. That's what rent-seeking means.

Are railroads that good an example? Some railroads and subways were built using eminent domain, although I don't know how much. And many of the large railroads built in the US in the second half of the 19th century went through land that did not have any private ownership but was given to the railroads by the government.

Railroads are a good example of a bad idea. The reason I picked them is that they were terrible; if I were going to pick innovative and creative real-estate purchases by private industry, I'd be talking about McDonald's or Starbucks.

Railroads weren’t a ter­rible idea. The canal sys­tem was a ter­rible idea, not railroads. Railroads cre­ated lots of in­dus­try that wouldn’t have been pos­si­ble with­out them. Many 19th cen­tury lead­ers thought of them as the best thing that ever hap­pened to Amer­ica.

The sys­tem of canals built in the early 19th cen­tury in the United States al­lowed the set­tle­ment of the old west and the de­vel­op­ment of in­dus­try in the north east (by al­low­ing grain from west­ern farms to reach the east). Why do you con­sider them a ter­rible idea? They were one of the cen­ter­pieces of the Amer­i­can Sys­tem, which was largely suc­cess­ful.

Because they would dump the waste off the left side of the boat, and get drinking water from the right. The actual sides would switch depending on which way they were going. I've been on those canal boats before; they are very, very slow. They had orphans walk on the side of the boat and guide the donkey (ass) that pulled it. They also took a long time to build, and didn't last that long.

Be­cause they would dump the waste off the left side of the boat, and get drink­ing wa­ter from the right.

This was a gen­eral prob­lem more con­nected to clean­li­ness as a whole in 19th cen­tury Amer­ica. Read a his­tory of old New York, and re­al­ize that it took mul­ti­ple plagues be­fore they even started dis­cussing not hav­ing live­stock roam­ing the city.

I’ve been on those canal boats be­fore, they are very, very slow.

Of course they were slow. They were an efficient method of moving a lot of cargo. Each boat moved slowly, but the total cargo moved was a lot more than could often be moved by other means. Think of it as high latency and high bandwidth.

They had or­phans walk on the side of the boat and guide the don­key (ass) that pul­led it.

In general, 19th century attitudes towards child labor weren't great. But what does this have to do with the canal system itself? Compared to many jobs they could have had, this would have been a pretty good one. And this isn't at all connected to using orphans; it isn't like the canals were powered by the souls of forsaken children. They were simply the form of cheap labor used during that time period for many purposes.

They also took a long time to build, and didn’t last that long.

The first point isn’t rele­vant un­less you are try­ing to make a de­tailed eco­nomic es­ti­mate of whether they paid for them­selves. The sec­ond is sim­ply be­cause they weren’t main­tained af­ter a few years once many of them were made ob­so­lete by rail lines. If the rails had not come in, the canals would have lasted much longer.

So they’re a ter­rible idea be­cause of bad san­i­ta­tion and child la­bor? In that case, the en­tire his­tory of eco­nomic ideas is bad up un­til 1920-ish. They un­ques­tion­ably achieved their goal of pro­vid­ing bet­ter trans­porta­tion. Am I to in­fer that you be­lieve that gov­ern­ment run high­ways are wrong be­cause there is trash strewn on the sides of the road?

Maybe, but that's not the point. They might have worked, maybe even made a profit, but I still say that they were inefficient, which is why we don't use them today (all that's left is a few large pieces of stone jutting out of rivers that passers-by can't explain).

That’s in­ter­est­ing. I wouldn’t ex­pect there to be many ex­am­ples of work­ing pri­va­tized roads and their effects on a na­tion­wide scale, but if there were, I’d love to see more about them, or even a good pa­per based on a hy­po­thet­i­cal.

I think you’re stuck in the mind­set of ‘if it wasn’t for our gov­ern­ment pro­vided roads where would we drive our cars?’. Such a world would prob­a­bly have fewer pri­vate cars and be ar­ranged in such a way that many or­di­nary peo­ple could get by perfectly well with­out a car, as is the case in many Euro­pean and Ja­panese cities.

This ar­ti­cle might help you un­der­stand some of the hid­den as­sump­tions many Amer­i­cans op­er­ate un­der. Note: this guy has some rather wacky ideas but his ar­ti­cles on ‘tra­di­tional cities’ are pretty in­ter­est­ing.

I strongly agree with you that the US fed­eral gov­ern­ment has spent too much on road sub­sidies over the years and should de­crease its cur­rent spend­ing.

That said, not ev­ery­where is Juneau, Alaska; not all sites con­nected to gov­ern­ment roads are a “Subur­ban Hell,” and not all in­hab­itants of the sub­urbs would pre­fer to live in a “Tra­di­tional City.” Roads are use­ful for ac­com­mo­dat­ing a highly mo­bile, atom­istic so­ciety that ex­ploits new re­sources and adopts new lo­cal trade routes ev­ery 20 years or so. Cars and park­ing lots are use­ful for sep­a­rat­ing peo­ple who have re­cently im­mi­grated from all differ­ent places and who re­ally don’t like each other and don’t want to have much to do with each other. In­ter­state high­ways were built for evac­u­a­tion and civil defense as well as for ac­tual trans­port. Fi­nally, re­gard­less of whether you pre­fer roads or trains, some level of gov­ern­ment sub­sidy and/​or co­or­di­na­tion is prob­a­bly needed to get the most effi­cient trans­porta­tion sys­tem pos­si­ble.

In any case, this thread started out as a dis­cus­sion of Tra­di­tional vs. Bayesian ra­tio­nal­ity, did it not? Im­prov­ing gov­ern­ment policy was merely the ex­am­ple cho­sen to illus­trate a point. It seems un­sports­man­like to shoot that point down on the grounds that vir­tu­ally all gov­ern­ment does more harm than good. Even if such a claim were true, one might still want to know how to gen­er­ate gov­ern­ment poli­cies that do rel­a­tively less harm, given a set of poli­ti­cal con­straints that tem­porar­ily pre­vent en­act­ing a strong ver­sion of (an­ar­cho)liber­tar­i­anism.

Even if such a claim were true, one might still want to know how to gen­er­ate gov­ern­ment poli­cies that do rel­a­tively less harm

The failure of gov­ern­ment is not a prob­lem of not know­ing which gov­ern­ment poli­cies would do rel­a­tively less harm. The pri­mary prob­lem of gov­ern­ment is that there is lit­tle in­cen­tive to im­ple­ment such poli­cies. Try­ing to im­prove gov­ern­ment by work­ing to figure out bet­ter poli­cies is like try­ing to avoid be­ing eaten by a lion by mak­ing a sound log­i­cal ar­gu­ment for the ethics of veg­e­tar­i­anism. The lion has no more in­ter­est in the finer points of ethics than a poli­ti­cian does in the effects of policy on any­thing other than his own self-in­ter­est.

I men­tioned el­se­where that gov­ern­ments of rel­a­tively small states with rel­a­tively ho­mo­ge­neous pop­u­la­tions seem to do bet­ter than av­er­age. Scal­ing these rel­a­tive suc­cesses up ap­pears prob­le­matic.

If small ho­mo­ge­neous states do best, then cam­paign­ing for de­volu­tion to the best available ap­prox­i­ma­tion of such might be the best move.

Yes, that or seast­eading. I’m also a firm be­liever in the ‘vot­ing with your feet’ ap­proach to cam­paign­ing. I have no de­sire to wait around un­til a demo­cratic ma­jor­ity are con­vinced for im­prove­ments to hap­pen lo­cally. Mi­gra­tion is one of the few com­pet­i­tive pres­sures on gov­ern­ments to­day.

Your link provides very little evidence for your claim. At the national level, to say that a program costs $1 million per year is unimpressive. Suppose, for the sake of argument, that the multiplier effect for mohair production is quite low, say, 0.5. I suspect that it is rather higher than that, since multiple people will go and card and weave and spin the damn fibers and then sell them to each other at art fairs, but let's say it's 0.5. That means you're wasting $500,000 a year. In the context of a $5 trillion annual budget, you're looking at 1 part per 10 million, or a 0.00001% increase in efficiency. Why should one of our 545 elected representatives, or even one of their 20,000 staffers, make this a priority to eliminate? The amazing thing is that the subsidy was eliminated at all, not that it crept back in. All systems have some degree of parasitism, 'rent', or waste. This is not exactly low-hanging fruit we're talking about here.
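To make the arithmetic above explicit: with the stated assumptions (a 0.5 multiplier and a $5 trillion budget, both for the sake of argument), the waste is indeed one part per ten million of the budget.

```python
# Checking the budget-waste arithmetic. The 0.5 multiplier and the
# $5 trillion budget figure are the comment's assumptions, not data.

program_cost = 1_000_000   # $1M/year mohair subsidy
multiplier = 0.5           # assumed value captured per dollar spent
waste = program_cost * (1 - multiplier)   # $500,000/year wasted
budget = 5_000_000_000_000                # assumed $5 trillion annual budget

fraction = waste / budget
print(fraction)            # 1e-07, i.e. 1 part per 10 million
print(f"{fraction:.5%}")   # 0.00001%
```

The per-representative framing follows the same logic: spread over 545 legislators, eliminating the program is worth under $1,000 of budget savings each per year.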

More gen­er­ally, I have worked for a few differ­ent poli­ti­ci­ans, and so far as I could tell, most of them mostly cared about figur­ing out bet­ter poli­cies sub­ject to main­tain­ing a high prob­a­bil­ity of be­ing re-elected. None of them ap­peared to have the slight­est in­ter­est in di­rectly prof­it­ing from their work as pub­lic ser­vants, nor in ex­ploit­ing their po­si­tions for fame, sex, etc. Those are just the cases that make the news. In my opinion, based on a mod­er­ate level of per­sonal ex­pe­rience, the as­sump­tion that poli­ti­ci­ans are pri­mar­ily mo­ti­vated by self-in­ter­est at the mar­gin in equil­ibrium is sim­ply false.

What did you take my claim to be? The ex­am­ple in the link is in­tended to illus­trate the fact that the prob­lem of poli­tics is not one of figur­ing out bet­ter policy. It is an ex­am­ple of a policy that is uni­ver­sally agreed to be bad and yet has per­sisted for over 60 years, de­spite a brief pe­riod in which it was tem­porar­ily stamped out. The mag­ni­tude of the sub­sidy in this case may be small but there are many thou­sands of such bad poli­cies, some of much greater in­di­vi­d­ual mag­ni­tude, and they add up. The ex­am­ple is in­ten­tion­ally a small and un-con­tro­ver­sial ex­am­ple since it is in­tended to illus­trate that if even minor bad poli­cies like this are hard to kill then vastly larger ones are un­likely to be elimi­nated with­out struc­tural re­form.

None of them ap­peared to have the slight­est in­ter­est in di­rectly prof­it­ing from their work as pub­lic ser­vants, nor in ex­ploit­ing their po­si­tions for fame, sex, etc.

Giv­ing this ap­pear­ance is fairly im­por­tant to suc­ceed­ing as a poli­ti­cian so this is not in­dica­tive of much. I find it more rele­vant to judge by ac­tual ac­tions and re­sults pro­duced rather than by words or care­fully cul­ti­vated ap­pear­ances.

In my opinion, based on a mod­er­ate level of per­sonal ex­pe­rience, the as­sump­tion that poli­ti­ci­ans are pri­mar­ily mo­ti­vated by self-in­ter­est at the mar­gin in equil­ibrium is sim­ply false.

As a well known poli­ti­cian once noted, you can fool some of the peo­ple all of the time.

As a well known poli­ti­cian once noted, you can fool some of the peo­ple all of the time.

In­deed you can! Be aware, though, that memes about gov­ern­ment cor­rup­tion and the peo­ple who ped­dle them may have just as much power to fool you as the ‘offi­cial’ au­thor­i­ties. Hol­ly­wood, for ex­am­ple, has a much larger pro­pa­ganda bud­get than the US Congress. When’s the last time a Hol­ly­wood movie show­cased vir­tu­ous poli­ti­ci­ans?

Also, be­ware of in­su­lated ar­gu­ments. If you as­sume that (a) poli­ti­ci­ans are amaz­ingly good at dis­guis­ing their mo­tives, and (b) that poli­ti­ci­ans do in fact rou­tinely dis­guise their mo­tives, your as­ser­tions are em­piri­cally un­falsifi­able. If you dis­agree, con­sider this: what could a poli­ti­cian do to con­vince you that he was hon­estly mo­ti­vated by some­thing like al­tru­ism?

When’s the last time a Hol­ly­wood movie show­cased vir­tu­ous poli­ti­ci­ans?

An Inconvenient Truth? Seriously though, I don't think Hollywood is particularly tough on politicians. It's a major enabler for the cult of the presidency, with heroic presidents saving the world from aliens, asteroids and terrorists. Evil corporations and businessmen get a far worse rap. The mainstream media is much too soft on politicians in the US, in my opinion, as well. Where's the US Paxman?

If you dis­agree, con­sider this: what could a poli­ti­cian do to con­vince you that he was hon­estly mo­ti­vated by some­thing like al­tru­ism?

I think some poli­ti­ci­ans ac­tu­ally be­lieve that they are act­ing for the ‘greater good’. Some­times when they lobby for spe­cial in­ter­ests they re­ally con­vince them­selves they are do­ing a good thing. It is some­times eas­ier to con­vince oth­ers when you be­lieve your own spiel—this is well known in sales. They surely of­ten think they are sav­ing oth­ers from them­selves by re­strict­ing their liber­ties and tram­pling on their rights. Ul­ti­mately what they re­ally be­lieve is some­what ir­rele­vant. I judge them by how they re­spond to in­cen­tives, whose in­ter­ests they ac­tu­ally pro­mote and what re­sults they achieve.

I don’t think be­ing mo­ti­vated by al­tru­ism is de­sir­able and I don’t think pure al­tru­ism ex­ists to any sig­nifi­cant de­gree.

I agree with you that Hol­ly­wood is soft on Pres­i­dents, and that the main­stream me­dia is soft on just about ev­ery­one, with the pos­si­ble ex­cep­tion of peo­ple who might be rob­bing a con­ve­nience store and/​or sel­l­ing mar­ijuana in your neigh­bor­hood, de­tails at eleven.

Some­times when they lobby for spe­cial in­ter­ests they re­ally con­vince them­selves they are do­ing a good thing.

From my end, it still looks like you’re start­ing with the be­lief that gov­ern­ment is wrong, and de­duc­ing that poli­ti­ci­ans must be do­ing harm. Your ar­gu­ments are so­phis­ti­cated enough that I’m as­sum­ing you’ve read most of the se­quences already, but you might want to re­view The Bot­tom Line.

I’m not sure to what ex­tent ei­ther of us has an open mind about our fun­da­men­tal poli­ti­cal as­sump­tions. I’m also un­sure as to whether the LW com­mu­nity has any in­ter­est in read­ing a sus­tained duel about ab­stract ver­sions of an­ar­choliber­tar­i­anism and rep­re­sen­ta­tive democ­racy. Worse, I at least sym­pa­thize with some of your ar­gu­ments; my main com­plaint is that you phrase them too strongly, too gen­er­ally, and with too much cer­tainty. For all those rea­sons, I’m not go­ing to post on this par­tic­u­lar thread in pub­lic for a few weeks. I will read and pon­der one more pub­lic post on this thread by you, if any—I try to let op­po­nents get in the last word when­ever I move the pre­vi­ous ques­tion.

All that said, if you’d like to talk poli­tics for a while, you’re more than wel­come to pri­vate mes­sage me. You seem like a thought­ful per­son.

I’m not sure to what ex­tent ei­ther of us has an open mind about our fun­da­men­tal poli­ti­cal as­sump­tions.

I described myself as a socialist 10 years ago when I was at university. My parents are lifelong Labour voters. I have changed my political views over time, which gives me some confidence that I am open-minded in my fundamental political assumptions. Caveats are that my Big 5 personality factors are correlated with libertarian politics (suggesting I may be biologically hardwired to think that way) and that, from some perspectives, I could be seen as following the clichéd route of moving to the right in my political views as I get older.

my main com­plaint is that you phrase them too strongly, too gen­er­ally, and with too much cer­tainty.

This is partly a stylistic thing—I feel that padding comments with disclaimers tends to detract from readability and distracts from the main point. I try to avoid saying things like ‘in my opinion’ (it should be obvious, given I’m writing it) or variations on the theme of ‘the balance of evidence leads me to conclude’ (where else would conclusions derive from?) or making comments merely to remind readers that 0 and 1 are not probabilities (here of all places I hope that this goes without saying). I used to make heavy use of such caveats but I think they tend to increase verbiage without adding much information. If it helps, imagine that I’ve added all these disclaimers to anything I say as a footnote.

All that said, if you’d like to talk poli­tics for a while, you’re more than wel­come to pri­vate mes­sage me. You seem like a thought­ful per­son.

I tend to subscribe to the idea that the best hope for improving politics is to change incentives, not minds, but periodically I get drawn into political debates despite myself. I’ll try to leave the topic for a while.

I tend to subscribe to the idea that the best hope for improving politics is to change incentives, not minds, but periodically I get drawn into political debates despite myself. I’ll try to leave the topic for a while.

In­cen­tives (or in­cen­tive struc­tures, like mar­kets [1]) are the re­sult of hu­man de­ci­sions.

Per­haps you mean chang­ing the minds of the peo­ple who set the in­cen­tives.

[1] A mar­ket’s in­cen­tives aren’t set in de­tail, but per­mit­ting the mar­ket to op­er­ate in pub­lic or not is the re­sult of a rel­a­tively small num­ber of de­ci­sions.

Seast­eading is ex­plic­itly de­signed to cre­ate al­ter­na­tive so­cial sys­tems that op­er­ate some­what out­side the bound­aries of ex­ist­ing states. An anal­ogy is try­ing to in­tro­duce rev­olu­tion­ary tech­nolo­gies by con­vinc­ing a demo­cratic ma­jor­ity to vote for your idea vs. found­ing a startup and tak­ing the ‘if you build it they will come’ route. The lat­ter ap­proach gen­er­ally ap­pears to have a bet­ter track record.

Char­ter cities were born out of a slightly differ­ent agenda but em­body similar prin­ci­ples.

A sim­ple step that in­di­vi­d­u­als can take is to move to a ju­ris­dic­tion in line with their val­ues rather than try­ing to change their cur­rent ju­ris­dic­tion through the poli­ti­cal pro­cess. Com­pe­ti­tion works to im­prove prod­ucts in or­di­nary mar­kets be­cause cus­tomers take their busi­ness to the com­pa­nies that best satisfy their prefer­ences. Mi­gra­tion is one of the few forces that ap­plies some level of com­pet­i­tive pres­sure to gov­ern­ments.

Other po­ten­tial ap­proaches are to sup­port se­ces­sion or de­volu­tion move­ments, things like the free state pro­ject, sup­port­ing the sovereignty of tax havens, ‘starv­ing the beast’ by struc­tur­ing your af­fairs to min­i­mize the amount of tax you pay, per­sonal offshoring and other di­rect in­di­vi­d­ual ac­tion that cre­ates com­pet­i­tive pres­sure on ju­ris­dic­tions.

I think he’s talk­ing from a gov­ern­ment per­spec­tive or a per­spec­tive of power.

Obviously, you can educate people that malaria is bad and beg people to solve the problem of malaria. It is, however, possible to know a lot about a problem and not do anything about it.

Or you could pay peo­ple a lot of money if they would show work that might help the prob­lem of malaria. I tend to think this method would be more effec­tive, al­though there are other effec­tive in­cen­tives than money.

In my opinion, based on a mod­er­ate level of per­sonal ex­pe­rience, the as­sump­tion that poli­ti­ci­ans are pri­mar­ily mo­ti­vated by self-in­ter­est at the mar­gin in equil­ibrium is sim­ply false.

As a well known poli­ti­cian once noted, you can fool some of the peo­ple all of the time.

It would ei­ther be po­lite or im­po­lite to make ex­plicit who the “some of the peo­ple” are that you re­fer to in this sen­tence, and what rele­vance this has to Mass_Driver’s re­mark. I am cu­ri­ous to hear which.

Mass_Driver ap­pears to be one of the peo­ple who can be fooled all of the time since he judges poli­ti­ci­ans by what they say and how they pre­sent them­selves rather than by what their ac­tions say about their in­cen­tives and mo­ti­va­tions. I did not in­tend to be am­bigu­ous.

Thank you—I had sus­pected that might be your mean­ing, but I pre­fer not to pro­nounce nega­tive judg­ments on peo­ple with­out clear cause, and I have read plenty of com­ments which ap­peared equally damn­ing but were of an in­no­cent na­ture upon elab­o­ra­tion. Carry on.

I ap­pre­ci­ate your un­usu­ally deft grasp of the English lan­guage. Upvoted.

(I also ap­pre­ci­ate the paucity of my ed­u­ca­tion in the so­ciol­ogy of rep­re­sen­ta­tive gov­ern­ment, and must there­fore bow out of the dis­cus­sion. Please dis­count my opinion ap­pro­pri­ately.)

Wow. That’s re­ally very eye-open­ing. And as some­one who has spent time in old cities out­side the US and doesn’t even drive, I’m a bit shocked about how much of an as­sump­tion I seem to be op­er­at­ing with about what a city should look like.

Japanese cities still have massive infrastructure and public transportation subsidies. It’s not “OMG, how can we not have cars?”; it’s “OMG, how can we actually have transportation in a non-governmental way that actually operates in a healthy market?”

City scale trans­porta­tion in­fras­truc­ture doesn’t re­quire large amounts of gov­ern­men­tal in­volve­ment. Tra­di­tional Euro­pean cities evolved for much of their his­tory with min­i­mal gov­ern­ment in­volve­ment. City level in­fras­truc­ture would be well within the ca­pa­bil­ities of pri­vate en­ter­prise in a world with more pri­vate own­er­ship of pub­lic space. Large pri­vately con­structed re­sorts (think Dis­ney­land) illus­trate the fea­si­bil­ity of the con­cept al­though they are not nec­es­sar­ily great ad­verts for its de­sir­a­bil­ity.

That site you linked to has an article comparing Toledo, Ohio to Toledo, Spain. It’s kind of unfair because Toledo, Ohio is a relatively small city and is dying economically. I was kind of offended because I live really close to there, but he does make a point.

Huh. Well, Toledo just seems like a craphole. Once they get around to demolishing all of those old buildings it will look better. And I can’t explain how people live without cars. It boggles me. Sure we have big roads, but seriously, who wants to walk for 20 miles every day?

And I can’t ex­plain how peo­ple live with­out cars. It bog­gles me. Sure we have big roads, but se­ri­ously, who wants to walk for 20 miles ev­ery day?

The point made in the dis­cus­sion of tra­di­tional cities I linked is that liv­ing with­out a car can be a night­mare in places that were de­signed around cars but that many cities that were not de­signed around cars are very liv­able with­out them. I’ve lived in Van­cou­ver for 7 years with­out a car quite hap­pily and it’s not even par­tic­u­larly pedes­trian friendly com­pared to many Euro­pean cities (though it is by North Amer­i­can stan­dards). I only walk about 3-4 miles a day.

I live in the middle of nowhere in northwest Ohio, actually. I don’t exactly consider it “the country”, but it is compared to other places I’ve been. The roads make 1-mile grids, and each has a dozen houses on it and a few fields and woods. Walking to town would take the better part of a day. Also, why are many modern cities built in the 19th century designed around cars, if cars were only invented in the latter half of that century and only became popular nearly half a century after that?

Ok. It looks like someone just did a drive-by and downvoted every single entry in this subthread by 1. (I noticed because I saw my karma drop by 13 points within about a 5-minute span since my last click on a LW page; glancing through, I saw that a lot of entries in this thread, including many that are not mine, had lower karma than when I last looked at the thread this morning, with many comments that were at 0 now at −1.) Can the person who did this please explain their logic?

When it comes to government policy I tend to grade on a curve. I actually agree with you that the quality of government policy is generally quite poor. But it’s not equally poor everywhere, and improving government’s function (which will in some cases mean having it do less) can do a lot of good for a lot of people.

I should also point out that choosing to take no action is still a policy decision. To give you an example, a few years ago some crazy woman pulled a knife on a plane, leading to a bit of an incident. There was a review of airline security regulation for domestic flights (which usually have no searches or metal detectors in my country). Cabinet decided, on the basis of advice from officials, that existing regulation was sufficient, and the only thing that needed to be done was to put a lockable door on the cabin, which was being phased in already. Would you regard this as a good policy decision?

I’d ques­tion the need to have gov­ern­ment in­volved in the de­ci­sion at all. Why not let the air­lines de­cide their own se­cu­rity poli­cies?

At least three rea­sons:

Be­cause air­lines have these large ob­jects that can func­tion as mis­siles and bring down build­ings. So failing to se­cure them harms lots of other peo­ple.

As with other in­dus­tries, in­di­vi­d­u­als do not have the re­sources to make de­tailed judg­ments them­selves about safety pro­ce­dures. This is similar to the need for gov­ern­ment in­spec­tion and reg­u­la­tion of drugs and food.

Violation of security procedures is (for a variety of good reasons) a criminal offense. In order for that to make any sense, you need the government to have some handle on what procedures do and do not make sense.

The first two reasons only justify requiring that airlines carry liability insurance policies against the external damage that can be caused by their planes and the injuries/deaths of passengers. Then, the insurer would specify what protocols airlines must follow before the insurer will offer an affordable policy. Passengers would not have to make such judgments in that case.

ETA: Ac­tu­ally, you know what? This has de­volved into a poli­ti­cal de­bate. Not cool. Can we wind this down? (To avoid the ob­vi­ous ac­cu­sa­tion, any­one can feel free to re­ply to my ar­gu­ments here and I won’t re­ply.)

Well, my gen­eral ap­proach is to think that we should con­tinue poli­ti­cal dis­cus­sions as long as they are not in­di­cat­ing mind-kil­ling. For ex­am­ple, I find your point about li­a­bil­ity in­surance to be very in­ter­est­ing, and not one I had thought about be­fore. It is cer­tainly worth think­ing about, but even then, that’s a differ­ent type of reg­u­la­tion, not a lack of reg­u­la­tion as a whole.

Well, my gen­eral ap­proach is to think that we should con­tinue poli­ti­cal dis­cus­sions as long as they are not in­di­cat­ing mind-kil­ling.

If it’s not there in your judg­ment then, I’ll con­tinue.

For ex­am­ple, I find your point about li­a­bil­ity in­surance to be very in­ter­est­ing, and not one I had thought about be­fore. It is cer­tainly worth think­ing about, but even then, that’s a differ­ent type of reg­u­la­tion, not a lack of reg­u­la­tion as a whole.

Yes, but it cer­tainly makes a differ­ence in how many choices and al­ter­na­tives reg­u­la­tion chokes off. Even if you be­lieve in reg­u­la­tion as a nec­es­sary evil, you should fa­vor the kind that ac­com­plishes the same re­sult with less in­tru­sion. And there’s a big differ­ence be­tween “Fol­low this spe­cific fed­eral code for air­line se­cu­rity”, ver­sus “Do any­thing that con­vinces an in­surer to un­der­write you for a lot of po­ten­tial dam­ages.”

Similarly, when it comes to re­strict­ing car­bon emis­sions, it makes much more sense to as­sign a price or scarcity to the emis­sions them­selves, rather than try to reg­u­late loose cor­re­lates, such as ban­ning prod­ucts that some­one has deemed “in­effi­cient”.

If you con­sider all that ob­vi­ous, then you should un­der­stand my frus­tra­tion when liber­tar­i­ans have to pull teeth to get peo­ple to agree to mere sim­plifi­ca­tions of reg­u­la­tion like I de­scribe above.

Yeah, no disagreement with those points. (Although, now thinking more about the use of insurance underwriting, there may be a problem getting large enough insurance. For example, in some areas there have been home insurance companies that went bankrupt after major natural disasters and didn’t have enough money to pay everything out. One could see similar problems occurring when one has potential loss in the multi-billion-dollar range.)

I think that pro­vides much more effec­tive over­sight of risk al­lo­ca­tion than any reg­u­la­tion.

No, it didn’t. Did you miss the part where Lloyds im­ploded, and the un­limited li­a­bil­ity de­stroyed scores of lives (and caused mul­ti­ple suicides)? The ‘rein­surance spiral’ cer­tainly was not effec­tive over­sight. Even count­ing the Names’ net worth, Lloyds had less re­serves and greater risk ex­po­sure than reg­u­lar cor­po­rate in­surance gi­ants that it com­peted with, like Swiss Re and Mu­nich Re.

EDIT: It oc­curs to me that the ob­vi­ous re­but­tal is that Lloyds was quite prof­itable for a cen­tury or two, and so we shouldn’t hold the as­bestos dis­aster against it. But it seems to me that any fool can ca­pa­bly in­sure against risks that even­tu­ate ev­ery month or year; high qual­ity risk man­age­ment is known from how well the ex­tremely rare events are han­dled.

Their li­a­bil­ity is still limited by the laws re­gard­ing per­sonal bankruptcy. You can’t pay back money you don’t have. (In the old days, there was debtor’s prison, but that re­ally doesn’t help any­one.)

Some liber­tar­i­ans op­pose limited li­a­bil­ity for share­hold­ers of cor­po­ra­tions be­cause it dis­torts the in­cen­tives to re­duce the risk of harm to third par­ties. I tend to lean in that di­rec­tion al­though I can see the merit in some ar­gu­ments in favour of limited li­a­bil­ity.

Ah yes, the or­tho­dox doc­trine of the Church of Un­limited Govern­ment. I’m a heretic and don’t ac­cept any of these as self ev­i­dent. I find it in­ter­est­ing that it doesn’t even oc­cur to most peo­ple to ask the ques­tion whether any given is­sue should even be con­sid­ered as a le­gi­t­i­mate con­cern of gov­ern­ment. From the sec­ond link (em­pha­sis mine):

Do you re­mem­ber the flap re­cently about the air­line that was go­ing to charge for carry-on lug­gage? And then a Con­gress­man said we need to pass a law say­ing that the air­lines can­not do that? Now, the mer­its of the is­sue are de­bat­able (as a pas­sen­ger, I think I might ac­tu­ally pre­fer to fly on an air­line that charges for carry-on lug­gage), but that is not the point. Even if we all felt re­ally strongly that charg­ing for carry-on lug­gage is evil, are we will­ing to say that gov­ern­ment should stay out of the is­sue, on prin­ci­ple? The liber­tar­ian says that in­deed the gov­ern­ment should stay out of it. The mem­ber of the Church does not. Again, be­ing ok with gov­ern­ment stay­ing out of it gets you liber­tar­ian points only if you care about the is­sue. If you are am­biva­lent about charg­ing for carry-on lug­gage or you think it’s a re­ally minor is­sue, then it’s not in the set of so­cial prob­lems that you feel are im­por­tant.

I bring up the carry-on lug­gage ex­am­ple be­cause to me it illus­trates the rel­a­tive strength of the forces for limited gov­ern­ment and the forces for un­limited gov­ern­ment. From my stand­point, the idea of reg­u­lat­ing the pric­ing of carry-on lug­gage is nutty as a fruit­cake. But it seemed perfectly nor­mal to most peo­ple—cer­tainly to most of our “thought lead­ers.” It seems to me that I be­long to the Dissent­ing Church, and the es­tab­lished church is the Church of Un­limited Govern­ment.

I’m not at all sure what any of this has to do with anything. I agree with the quoted section that having the government step in to regulate how much carry-on luggage people can have is an example of people making bad assumptions about government. Indeed, this one is particularly stupid because it is economically equivalent to charging a higher price and then offering a discount to people who don’t bring carry-on luggage. And psych studies show that, if anything, people react more positively to things framed as a discount.
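A minimal sketch of the claimed equivalence, with invented fares (the $100 base fare and $30 fee are purely illustrative): the “bag fee” and the “no-bag discount” produce identical prices for every passenger, so only the framing differs.

```python
# Toy check that a carry-on fee and an equivalent "no-bag discount" are the
# same pricing schedule. The $100/$30 figures are invented for illustration.
BASE_FARE, BAG_FEE = 100, 30

def fee_framing(brings_bag):
    """Advertise the low fare, then add a charge for a carry-on."""
    return BASE_FARE + (BAG_FEE if brings_bag else 0)

def discount_framing(brings_bag):
    """Advertise a higher fare, then discount passengers who travel light."""
    high_fare = BASE_FARE + BAG_FEE
    return high_fare - (0 if brings_bag else BAG_FEE)

# Every passenger pays the same amount under both framings.
for brings_bag in (True, False):
    assert fee_framing(brings_bag) == discount_framing(brings_bag)
```

The economic incidence is identical in both cases; only the psychological framing (surcharge vs. discount) changes.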

But I don’t see what this has to do with any­thing I listed. Can you ex­plain for ex­am­ple how the fact that air­planes are effec­tively large mis­siles is not a good rea­son for the gov­ern­ment to be con­cerned about their se­cu­rity? The use of air­planes as weapons is not fic­tional.

Similarly, re­gard­ing my sec­ond point are you claiming that peo­ple in gen­eral do have the time and re­sources to de­ter­mine if any given drug is safe or is even what it is claimed to be? I’m cu­ri­ous how other than gov­ern­ment reg­u­la­tion you in­tend to pre­vent peo­ple from dilut­ing drugs for ex­am­ples.

Edit: And having now read the essays you linked to, I have to say that I’m a bit confused. The notion that the US of all countries has a religious belief in unlimited government is difficult for me to understand. The US often has far less regulation and government intervention than, say, most of Europe. So the claim that the US has a religion of “Unlimited Government” as a replacement for an established religion clashes with the simple fact that many countries which do have established or semi-established religions still have far more government intervention. Meanwhile, it seems that it is frequently politically helpful in the US to talk about “getting the government off of people’s backs” or something similar. So how the heck is this a religion in the US?

I’m not at all sure what any of this has to do with anything. I agree with the quoted section that having the government step in to regulate how much carry-on luggage people can have is an example of people making bad assumptions about government.

This rather illus­trates my point. You can see the lack of jus­tifi­ca­tion for a fairly ex­treme ex­am­ple like the carry on lug­gage but can’t see how that re­lates to the ques­tion of air­line se­cu­rity. From my per­spec­tive the idea that gov­ern­ment should even be dis­cussing what to do about air­line se­cu­rity in the origi­nal ex­am­ple is at least as ridicu­lous as the lug­gage ex­am­ple is from your per­spec­tive.

Can you ex­plain for ex­am­ple how the fact that air­planes are effec­tively large mis­siles is not a good rea­son for the gov­ern­ment to be con­cerned about their se­cu­rity? The use of air­planes as weapons is not fic­tional.

Air­lines already have a strong eco­nomic in­cen­tive to take mea­sures to avoid hi­jack­ing and ter­ror­ist at­tacks, both due to the high cost of los­ing a plane and to the rep­u­ta­tional dam­age and pos­si­ble li­a­bil­ity claims re­sult­ing from pas­sen­ger deaths and from the de­struc­tion of the tar­get. I would ex­pect them to do a bet­ter job of de­vel­op­ing effi­cient se­cu­rity mea­sures to miti­gate these risks if gov­ern­ment were not in­volved and also to do a bet­ter job of trad­ing off in­creased se­cu­rity against in­creased in­con­ve­nience for trav­el­ers. There is ab­solutely no rea­son why a po­ten­tially dan­ger­ous ac­tivity ne­ces­si­tates gov­ern­ment in­volve­ment to miti­gate risks.

Similarly, re­gard­ing my sec­ond point are you claiming that peo­ple in gen­eral do have the time and re­sources to de­ter­mine if any given drug is safe or is even what it is claimed to be? I’m cu­ri­ous how other than gov­ern­ment reg­u­la­tion you in­tend to pre­vent peo­ple from dilut­ing drugs for ex­am­ples.

You can make the same ar­gu­ment with re­gard to many goods and ser­vices available in our com­plex mod­ern world. It is equally flawed when ap­plied to drugs as when ap­plied to com­put­ers, cars or fi­nan­cial prod­ucts. There is no rea­son why gov­ern­ment has to play the role of gate­keeper, guardian and guaran­tor. In mar­kets where gov­ern­ment in­volve­ment is min­i­mal other en­tities fill these roles quite effec­tively.

There is ab­solutely no rea­son why a po­ten­tially dan­ger­ous ac­tivity ne­ces­si­tates gov­ern­ment in­volve­ment to miti­gate risks.

Since the ital­ics are yours, I’m go­ing to fo­cus on that term and ask what you mean by ne­ces­si­tate? Do you mean so­ciety will in­evitably fall apart with­out it? Ob­vi­ously no one is go­ing to make that ar­gu­ment. Do you mean just that there are po­ten­tially ways to try to ap­proach the prob­lem other than the gov­ern­ment? That’s a much weaker claim.

You can make the same ar­gu­ment with re­gard to many goods and ser­vices available in our com­plex mod­ern world. It is equally flawed when ap­plied to drugs as when ap­plied to com­put­ers, cars or fi­nan­cial prod­ucts. There is no rea­son why gov­ern­ment has to play the role of gate­keeper, guardian and guaran­tor. In mar­kets where gov­ern­ment in­volve­ment is min­i­mal other en­tities fill these roles quite effec­tively.

Really? Cars are extensively regulated. The failure of government regulation is seen by many as part of the current financial crisis. And computers don’t (generally) have the same fatality concerns. What sort of institution would you replace the FDA with?

Since the ital­ics are yours, I’m go­ing to fo­cus on that term and ask what you mean by ne­ces­si­tate?

I mean that rec­og­niz­ing the ex­is­tence of a per­ceived prob­lem does not need to lead au­to­mat­i­cally to con­sid­er­ing ways that gov­ern­ment can ‘fix’ it. Drug pro­hi­bi­tion is a clas­sic ex­am­ple here. Many peo­ple see that there are prob­lems as­so­ci­ated with drug use and jump straight to the con­clu­sion that there­fore there is a need for gov­ern­ment to reg­u­late drug use. Not ev­ery prob­lem re­quires a gov­ern­ment solu­tion. The mind­set that all per­ceived prob­lems with the world ne­ces­si­tate gov­ern­ment con­ven­ing a com­mis­sion and de­vis­ing reg­u­la­tion is what I am crit­i­ciz­ing.

What sort of institution would you replace the FDA with?

I’d abol­ish the FDA but I wouldn’t re­place it with any­thing. That’s kind of the point. Peo­ple would still want in­de­pen­dent as­sess­ments of the safety and effi­cacy of med­i­cal treat­ments and with­out the crowd­ing out effects of a gov­ern­ment sup­ported monopoly there would be strong in­cen­tives for pri­vate in­sti­tu­tions to satisfy that de­mand. The fact that the na­ture of these in­sti­tu­tions would not be de­signed in ad­vance by gov­ern­ment but would evolve to meet the needs of the mar­ket is a fea­ture, not a bug.

I can kind of see how a private company could test and recommend/approve drugs, but what about snake-oil salesmen? No, this system wouldn’t work at all. Too many people would die or be seriously hurt for no reason.

True, and they wouldn’t deserve it, but the truth is, there are a lot of really awesome, effective drugs that either take forever to get approved, or don’t get approved at all. This kills people, too.

And there are a lot of diseases, like bronchitis, that are easy for a person to diagnose in themselves, and know that they need an antibiotic, but it costs a hundred dollars to see a doctor to tell him what he already knows so he can get the medicine, and if that’s the difference between him paying the rent or not… then, hypothetically, he dies because it goes untreated.

It’s more a problem of political viability than anything else.

And there are a lot of dis­eases, like bron­chitis, that are easy for a per­son to di­ag­nose in them­selves, and know that they need an an­tibiotic,

And then they mis­di­ag­nose it, and an­tibiotic re­sis­tance in­creases, and then the an­tibiotic doesn’t work when they need it. Or they di­ag­nose it but miss a warn­ing sign for an­other dis­ease that a doc­tor would have no­ticed and tested for. No thanks, I’d much rather have peo­ple who have gone to med­i­cal school for years make that de­ci­sion.

The “peo­ple to be af­fected” are the gen­eral pub­lic, who suffer when con­ta­gious dis­eases aren’t treated prop­erly, and the gen­eral pub­lic makes these de­ci­sions through elected poli­ti­ci­ans. Also, these de­ci­sions are fre­quently based on recom­men­da­tions by ad­minis­tra­tors with de­grees in Public Health.

Some day I hope some­one with­out an axe to grind does an in-depth study es­ti­mat­ing how badly peo­ple would be harmed with drug reg­u­la­tion v. with­out drug reg­u­la­tion. I’ve seen the ‘yeah but reg­u­la­tion causes harms’ ver­sus ‘yeah but non-reg­u­la­tion causes harms’ ar­gu­ment be­fore, but I can’t re­mem­ber see­ing any­one try to rigor­ously and com­pre­hen­sively quan­tify the re­spec­tive pros and cons of both courses of ac­tion and com­pare them.

Have you looked at the aca­demic stud­ies on the topic? Are these the “axe-grind­ing” “ar­gu­ments” that you dis­miss? Sim­ple com­par­i­sons of the US vs Europe dur­ing times when one was sys­tem­at­i­cally more con­ser­va­tive seems to me to be a pretty rea­son­able method­ol­ogy, but maybe you don’t con­sider it “rigor­ous” or “com­pre­hen­sive.”

Maybe I’m over­do­ing the scare quotes, but those words were not helpful for me to iden­tify what you have looked at, whether our dis­agree­ment is due to your ig­no­rance or my lower stan­dards.

Have you looked at the aca­demic stud­ies on the topic? Are these the “axe-grind­ing” “ar­gu­ments” that you dis­miss?

I have not, and my com­ment was not in­tended to slam what­ever gen­uinely un­bi­ased aca­demic stud­ies of the topic there are.

My comment refers to the times I’ve been a bystander to arguments about the utility of pharmaceutical drug regulation, both in real life and online; a pattern I noticed is the arguers failing to cite hard, quantitative evidence or make an argument based on the numbers. At best they might cite particular claims from think tanks or other writers/groups with a political agenda that would plausibly bias the analysis.

So when I say I’ve seen the ar­gu­ment be­fore, I’m not think­ing of the ab­stract de­bate over whether what the FDA does is a net good or not, or par­tic­u­lar pieces of aca­demic work; I’m think­ing of con­crete oc­ca­sions where peo­ple have started ar­gu­ing about it in my pres­ence, and the failure of the peo­ple I’ve wit­nessed ar­gu­ing about it to pre­sent de­tailed ev­i­dence.

I haven’t tried to re­search the topic in de­tail, so I don’t know pre­cisely what ground the aca­demic stud­ies cover. At any rate, I didn’t mean to claim knowl­edge of the field and to im­ply that there aren’t any. I gen­uinely do just mean that I haven’t seen them, be­cause lay­men (in­clud­ing the par­ent posters in this sub­thread, at least so far) don’t men­tion them when they ar­gue about the is­sue. As I wrote be­fore, I added the ‘axe to grind’ warn­ing not as a pre­emp­tive slam on aca­demics, but be­cause I sus­pect there have already been some overtly par­ti­san analy­ses of the sub­ject, and I want to dis­cour­age peo­ple from sug­gest­ing them to me.

Sim­ple com­par­i­sons of the US vs Europe dur­ing times when one was sys­tem­at­i­cally more con­ser­va­tive seems to me to be a pretty rea­son­able method­ol­ogy, but maybe you don’t con­sider it “rigor­ous” or “com­pre­hen­sive.”

In this context, what I mean by ‘rigorously and comprehensively’ is that the analysis should satisfy basic standards for causal inference—all important confounding variables should be accounted for, and so on. For example, it would not be ‘rigorous’ to just collect a list of countries and compare the lifespan of those with an FDA-like administration with those that don’t, because there are almost certainly confounding variables involved, and it’s not clear that lifespan is a suitably relevant outcome variable. We might pick a more suitable outcome variable and use a regression to try controlling for one or two confounders, but we still wouldn’t have a ‘comprehensive’ analysis without a list of all of the significant confounding variables, and a way to adjust for them or vitiate their effects.

One rigorous and comprehensive way to evaluate the question, although not a very realistic one, would be a global randomized trial. We might agree on a set of outcome variables, carefully measure them in every country in the world, randomly assign half the countries to having an FDA and the other half no FDA, and then come back after a pre-agreed number of years to re-measure the outcome variables and check for an effect in the countries with an FDA.

Now of course we don’t have that dataset, so if we want evidence we have to make do with what we have, perhaps by comparing the US and Europe as you mention. That could be a pretty good way to test for a positive/negative effect of drug regulation, or it could be a pretty bad way, but I’d need to hear more details about the precise method to say.

Maybe I’m overdoing the scare quotes, but those words didn’t help me identify what you have looked at, or whether our disagreement is due to your ignorance or my lower standards.

I’m not sure what you believe we’re disagreeing about. I think you might have gotten the wrong impression of my intentions—I wasn’t trying to score points off RomanDavis or Houshalter or mattnewport or anyone else in this thread, or imply that drug regulation is obviously good/bad and only an axe grinder could think otherwise. At any rate, if you have citations for academic studies you think I’d find informative, I’d like them.

The disagreement was just that you seemed to say (by the phrasing “some day”) that there had not been any good work on the subject.

The only such paper I remember reading is Gieringer. That link is to a whole bibliography, compiled by people with a definite slant, so I can’t guarantee that there aren’t contradictory papers with equally good methodology.

I genuinely do just mean that I haven’t seen them, because laymen (including the parent posters in this subthread, at least so far) don’t mention them when they argue about the issue.

I’m reminded of Bruce Bueno de Mesquita, who gives the impression of having fabricated the papers assessing him, but they’re real.

Fair enough. Thanks for the Gieringer 1985 cite; it’s 25 pages long so I haven’t read it yet, but skimming through it I see a couple of quantitative tables, which is a good sign, and that it was published in the Cato Journal, which is not such a good sign. But it’s something!

I had noticed that you said that. I was originally not going to draw attention to the paper’s source, but it occurred to me that someone might then have asked me whether I was aware of the paper’s source, referring to my earlier claim that I wanted to discourage people from offering me overtly partisan analyses. So I decided to pre-empt that possible confusion/accusation by acknowledging the paper’s origin in a libertarian-leaning journal.

Yeah, I was thinking of bringing up examples myself, but because of the various axes involved, bringing one up might not be terribly effective.

Another person (I think it was cousin_it) brought up the idea that it should come down to a bet. If we bet ten dollars, and one of us kept arguing after the evidence was in and the bet was lost, all it would come down to is, “If you’re so smart, why aren’t you rich?”

EDIT: Also, someone went and downvoted the crap out of me. Who’d I make mad, and why?

Yeah, I was thinking of bringing up examples myself, but because of the various axes involved, bringing one up might not be terribly effective.

Yup. I thought of the ‘without an axe to grind’ proviso because I expect some politically-aligned think tanks out there have already published pamphlets or reports arguing one side or the other, but I wouldn’t be inclined to take their claims very seriously.

EDIT: Also, someone went and downvoted the crap out of me. Who’d I make mad, and why?

Yes, it looks like almost all the comments related to the government policy issue got downvoted. This is annoying in that I, at least, thought it was a calm, rational discussion which was showing that political discussion isn’t necessarily mind-killing. I’m particularly perplexed by the downvoting of comments which consisted of either interesting non-standard ideas or evidence for their claims.

I hope someone without an axe to grind does this; if there are axes involved, it’s much more likely to turn out supporting whatever the person thought before, i.e. not strongly correlated with how people are hurt or helped by regulation.

Are you considering the other side of the ledger? The people deprived of potentially life-saving new treatments because they have not yet been approved? The innovative new medical companies that never get started because of the barriers to entry formed by the regulatory agencies and the big pharmaceutical companies who know how to navigate their rules? The new treatments for rare diseases that are never developed because the market is too small to justify the costs of gaining regulatory approval? The effective anti-venoms already used successfully in other countries that are not available to treat rare snake bites in the US because FDA approval is too onerous?

The FDA doesn’t even have a perfect track record achieving its stated aims. As with any large government agency, private alternatives would be more cost-effective and better at the job.

Alternative medicine used to be much more closely regulated. A lot of these products were more closely regulated until lobbying by the alternative-medicine industry led to the Dietary Supplement Health and Education Act of 1994, which made it much harder for the FDA to regulate them.

So how do you feel about the government regulating what credit card issuers or insurers are allowed to offer? I see this as similar to the carry-on luggage issue. I don’t want credit card companies to be allowed to offer misleading rates or unfair policies like paying off the lowest interest rates first. I’m not sure about carry-on luggage, but what about charging for a bathroom? That seems clearly within the scope of legitimate concerns of government, given that air travel is already heavily regulated.

I think there are some credit card practices that could be framed as fraud (You can change my interest rate without telling me? And without telling me you won’t tell me? Seriously? What the hell?), so the government would have to be involved even in a strict libertarian society, but I never like where this is going.

Libertarianism, as a political concept, was an idea invented by David Nolan to suit his political theories. He had a chart, and a quarter of it is various types of libertarians.

If you like more social liberties than the American center, and more economic liberties, and are willing to forgo some amount (even a small amount) of government services and protections to achieve them, then you are somewhere on that quarter of the map. You don’t necessarily have to be way off in the corner with the anarchists or defend every idea they have.

That seems clearly within the scope of legitimate concerns of government, given that air travel is already heavily regulated.

This argument doesn’t work. Just because you already have heavy regulation doesn’t justify having more regulation. Also, many libertarians would say that the solution should be to simply remove much of the heavy regulation of air travel.

This argument doesn’t work. Just because you already have heavy regulation doesn’t justify having more regulation.

Well, it doesn’t by itself justify more regulation, but it makes additional regulation less burdensome. If trains were not regulated and planes were, it might be reasonable to add regulation of bathrooms to plane regulations, but not to introduce regulation to trains just so we could regulate bathrooms.

Also, many libertarians would say that the solution should be to simply remove much of the heavy regulation of air travel.

So basically it all comes down to “Should the government worry about this or not?” Are there any good heuristics or principles for determining whether or not the government should regulate something? I’m not upset at the system for being wrong per se, but I am upset about it being so inconsistent and unreliable.

Have you read Taleb’s The Black Swan? He has a counterfactual story that is extremely similar (though it uses 9/11); basically, there aren’t any (even negative) incentives for politicians to push such policies through until after some huge disaster happens.

I haven’t read Taleb, but I have heard a few interviews of him where he got the opportunity to outline his ideas.

I think politicians in general have a tendency to overreact to adverse events, often by doing things that involve signals of reassurance (such as security theatre) rather than steps to fix the problem. I’m open to the possibility that they don’t do enough to prevent problems, but as a rule governments are very risk-averse entities, usually preoccupied with things that might go wrong.

In what way is this a useful response to James_K? What do you believe James_K is doing that he shouldn’t be doing (or vice-versa), such that your comment is likely to lead him toward better action?

Note that the general Less Wrong consensus is that religion in almost all forms is very wrong. It is a safe operating assumption to work with on LW, in that you don’t need to go through the logic every time to justify it. It probably isn’t as safe a starting point as, say, the wrongness of a flat earth or the wrongness of phlogiston, but it is pretty safe.

This is not a site that devotes a whole lot of space to debating religion. People aren’t getting mean so much as they’re using shorthand. It can save time, for atheists, not to explain why they’re atheists over and over. Hence the links. The sequences are a pretty good expression of why the majority around here is atheist. They’re the expansion of the shorthand. If you’re anything like me, reading them will probably move some of your mental furniture around; even if not, you’ll talk the lingo better.

Chill with the downvotes, guys. Houshalter’s new, looks to be participating well in other threads, and is just stating a belief for the first time.

Houshalter, this is a tangent to the current… tangent. It might be better to discuss theism in its own Open Thread comment or within a past discussion on the topic.

On a related note, have you looked through the Mysterious Answers to Mysterious Questions sequence yet? Not to throw a short book’s worth of stuff at you, but there’s a lot of stuff taken for granted around here when discussing theism, the supernatural, and evidence for such.

Chill with the downvotes, guys. Houshalter’s new, looks to be participating well in other threads, and is just stating a belief for the first time.

Uh… thanks?

Houshalter, this is a tangent to the current… tangent. It might be better to discuss theism in its own Open Thread comment or within a past discussion on the topic.

I have debated my religion before, but ironically this looks like a bad place to make a stand, because everyone’s against me and there’s a karma system.

On a related note, have you looked through the Mysterious Answers to Mysterious Questions sequence yet? Not to throw a short book’s worth of stuff at you, but there’s a lot of stuff taken for granted around here when discussing theism, the supernatural, and evidence for such.

D: GAHHH!!! D: Hundreds of links to pages that contain hundreds of more links. D:

I have debated my religion before, but ironically this looks like a bad place to make a stand, because everyone’s against me and there’s a karma system.

Don’t take the adversarial attitude: “taking a stand”, “against me”. This leads to a broken mode of thought. Just study the concepts that will allow you to cut through semantic stopsigns and decide for yourself. Taking advice on an efficient way to learn may help as well.

Chill with the downvotes, guys. Houshalter’s new, looks to be participating well in other threads, and is just stating a belief for the first time.

Uh… thanks?

Occasionally someone will show up here and try to flame-bait us, not really arguing (or not responding to counterarguments) but just trying to provoke people with contrary opinions. (This is, after all, the Internet.) It’s obvious from your other contributions that you’re not doing that, but someone who’d only seen your two comments above might have wrongly assumed otherwise. I was explaining why the downvotes should be taken back, as it appears they were.

By the way, the mainstream view among Less Wrong readers is that any evidence we’ve seen for theism is far too weak to overcome the prior improbability of such a sneakily complex hypothesis (and that much of the evidence that we might expect from such a hypothesis is absent); but there are a few generally respected theists around here. The community norm on theism has more to do with how people conduct themselves in disputes than with the fact of disagreement—but you should be prepared for a lot of us to talk amongst ourselves as if atheism is a settled question, and not be too offended by that. (Consider it a role reversal from an atheist’s social interactions with typical Americans.)

I’ve enjoyed my exchanges with you so far, and look forward to more!

It’s considered poor form to delete a post or comment on LW, since it makes it impossible to tell what the replies were talking about. (Also, it doesn’t restore the karma.)

What’s preferable, if one regrets a comment, is to edit it in a manner that keeps it clear what the original comment was, or to add a disclaimer. Here’s one example—note that if cousin_it had just deleted the post, it would be more difficult to understand the comments on it.

It might be better to just spend some time reading the sequences. A lot of people here, like myself, disagree with the LW consensus views on a fair number of issues, but we have a careful enough understanding of what those consensus views are to know when to be explicit about what assumptions and what methods of reasoning we are using.

I have debated my religion before, but ironically this looks like a bad place to make a stand, because everyone’s against me and there’s a karma system.

Awwwww, I’m not against you. I just think you’re incorrect.

If you post on Less Wrong a lot, you’ll eventually say something several posters will disagree with, and some of them will say so. Try not to interpret it as a personal attack—taking it personally makes it harder to rationally evaluate new arguments and evidence.

I wouldn’t expect the karma system to be much of a problem, by the way. If I remember rightly, your karma can’t go below 0, so you can continue posting comments even if it falls to zero.

So it is. On the bright side, it looks like your karma loss is from getting downvoted on quite a lot of comments (about a dozen over the past 4 days, it looks like) rather than from arguing about God as such. And I see you can still post. :-)

I have a theory: Super-smart people don’t exist; it’s all due to selection bias.

It’s easy to think someone is extremely smart if you’ve only seen the sample of their most insightful thinking. But every time that happened to me, and I found that such a promising person had a blog or something like that, it universally took very little time to find something terribly brain-hurtful they’ve written there.

So the null hypothesis is: there’s a large population of fairly-smart-but-nothing-special people, who think and publish their thoughts a lot. Because the best thoughts get distributed, and average and worse thoughts don’t, it’s very easy from such small biased samples to believe some of them are far smarter than the rest, but their averages are pretty much the same.

(feel free to replace “smart” by “rational”, the result is identical)

Some people think out loud. Some people don’t. Smart people who think out loud are perceived as “witty” or “clever.” You learn a lot from being around them; you can even imitate them a little bit. They’re a lot of fun. Smart people who don’t think out loud are perceived as “geniuses.” You only ever see the finished product, never their thought processes. Everything they produce is handed down complete as if from God. They seem dumber than they are when they’re quiet, and smarter than they are when you see their work, because you have no window into the way they think.

In my experience, there are far more people who don’t think out loud in math than in less quantitative fields. This may be part of why math is perceived as so hard; there are all these smart people who are hard to learn from, because they only reveal the finished product and not the rough draft. Rough drafts make things look feasible. Regular smart people look like geniuses if they leave no rough drafts. There may really be people who don’t need rough drafts in the way that we mundanes do—I’ve heard of historical figures like that, and those really are savants—but it’s possible that some people’s “genius” is overstated just because they’re cagey about expressing half-formed ideas.

You may be right about math. Reading the Polymath research threads (like this one) made me aware that even Terry Tao thinks in small and well-understood steps that are just slightly better informed than those of the average mathematician.

I’m not a psychologist, but I thought I could improve on the vagueness of the original discussion.

There are a few factors which determine “smartness” (or potential for success):

Speed. Having faster hardware.

Pattern Recognition. Being better at “chunking”.

Memory.

Creativity. (= “divergent” thinking.)

Detail-awareness.

Experience. Having incorporated many routines into the subconscious thanks to extensive practice.

Knowledge. (Quality is more important than quantity.)

The first five traits might be considered part of someone’s “talent.” Experience and knowledge, which I’ll group together as “training”, must be gained through hard work. Potential for success is determined by a geometric (rather than additive) combination of talent and training: that is, roughly,

potential for success = talent * training

All this math, of course, is not remotely intended to be taken at face value; it’s merely the most efficient way to make my point.

The “super-smart” start life with more talent than average. The rule of the bell curve holds, so they generally do not have an overwhelming cognitive advantage over the average person. But they have enough talent to justify investing much more of their resources into training. This is because a person with 15 talent will gain 15 success for every unit of time they put into training, while a unit of training is worth 17 success for a person with 17 talent. The less time you have to spend, the more time costs, so all other things being equal, the person with more talent will put more time into training. Suppose the person with 15 talent puts 100 units of time into training, and the person with 17 talent puts 110 units of time into training. Then:

person with 15 talent * 100 training ⇒ 1500 success

person with 17 talent * 110 training ⇒ 1870 success

Which is 25% more success for only 13% more talent.

There’s probably some more formal work done along these lines; I’m not an economist either.
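If it helps to make the toy arithmetic concrete, here is a minimal sketch of the model above. The function and the numbers (15 vs. 17 talent, 100 vs. 110 units of training) are just the hypothetical ones from the comment, not data about anyone:

```python
# Toy talent*training model: each unit of training is worth `talent`
# units of success, so the combination is multiplicative.
def success(talent, training):
    return talent * training

a = success(15, 100)
b = success(17, 110)

print(b / a - 1)    # roughly 25% more success...
print(17 / 15 - 1)  # ...for roughly 13% more talent
```

The multiplicative form is doing all the work here: a small edge in talent raises the payoff of every training unit, which in turn justifies buying more training, so the two advantages compound instead of merely adding.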

If you’re interpreting “super-smart” to mean always right, or at least reasonable, and thus never severely wrong-headed, I think you’re correct that no one like that exists, but it seems like a rather comic-bookish idea of super-smartness.

Also, I have no idea how good your judgment is about whether what you call brain-hurtful actually consists of ideas I’d think were egregiously wrong.

I think there are a lot of folks smart enough to be special people—those who come up with worthwhile insights frequently.

And even if it’s just a matter of generating lots of ideas and then publishing the best, recognizing the best is a worthwhile skill. It’s conceivable that idea-generation and idea-recognizing are done by two people who together give the impression of one person who’s smarter than either of them.

I think my comment was rather vague, and people aren’t sure what I meant.

These are all my impressions; as far as I can tell, the evidence for all this is rather underwhelming. I’m writing this more to explain my thinking than to “prove” anything.

It seems to me that people come in different levels of smartness. There are some people with all sorts of problems that make them incapable of even the human normal, but let’s ignore them entirely here.

Then, there are normal people who are pretty much incapable of original highly insightful thought, critical thinking, rationality, etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise, and that’s about it. They often make the most basic logic mistakes, etc.

Then there are “smart” people who are capable of original insight, and don’t get too stupid too often. IQ tests aren’t measuring exactly the same thing, but they are capable of distinguishing between those and the normal people reasonably well. With smart people, both their top performance and their average performance are a lot better than with average people. In spite of that, all of them very often fail basic rationality in some particular domains they feel too strongly about.

Now I’m conflicted about whether people who are as much above “smart” as “smart” is above normal really exist. A canonical example of such a person would be Feynman—from my limited information he seems to be just so ridiculously smart. Eliezer seems to believe Einstein is like that, but I have even less information about him. You can probably think of a few other such people.

Unfortunately there’s a second observation—there’s no reason to believe such people existed only in the past, or would have an aversion to blogging—so if super-smart people exist, it’s fairly certain that some blogs by such people exist. And if such blogs existed, I would expect to have found a few by now.

And yet, every time it seemed to me that someone might just be that smart and I started reading their blog, it turned out very quickly that my estimate of their smartness suffered from rapid regression to the mean. All my super-smart candidates managed to say such horrible things, and to be deaf to such obvious arguments, that I doubt any of them really qualifies.

So here’s an alternative theory. No human alive is much smarter than the “normally smart”. Out of the population of normally smart people, thanks to domain expertise, wit and writing skill, compatibility with my beliefs (or at least happening to avoid my red flags), higher productivity, luck, etc., some people simply seem much smarter than that.

I’m not trolling here, but consider Eliezer—I picked the example because it’s well known here. For some time he was exactly such a candidate, however:

he is ridiculously good at writing—just look at his fanfics—biasing my perception

he manages to avoid many of my red flags, biasing my perception

he has a cultural background pretty similar to mine, biasing my perception

his writing style is very good at avoiding unwarranted certainty—this might seem more rational, but it’s really more of a style issue. People like Eliezer and Tyler Cowen who write cautiously just seem far smarter to me than people like Robin Hanson who write in a “no disclaimers” style—even though I know very well that Robin is fully aware that the contrarian theories he proposes are usually wrong, and that there are usually other factors in addition to the one he happens to be writing about at the moment, and he says so every time he’s asked. Style differences bias my perception again.

Eliezer usually manages to avoid writing about things I know more about than he does, so he usually has the advantage of expertise, biasing my perception.

So it’s safe to guess that however smart Eliezer is, I’m overestimating him—nearly all the biases point the same way.

On the other hand, he sometimes makes ridiculously wrong statements, like his calculation of the cost of cryonics, which was blatantly an order of magnitude off—I still don’t know if this was a massive brain failure (this and other such failures disqualifying him as a super-smart candidate), or a conscious attempt at dark arts (in which case he might still qualify, but he loses points for other reasons).

On the other hand, and this provides some counter-evidence to my theory—let’s look at myself. I publish anything on my blog and in comments everywhere that seems to have expected public value higher than zero, and very often I’m in a hurry / sleep-deprived, or otherwise far below my top performance. I exaggerate to get the point across very often. I write outside my area of expertise a lot, not uncommonly making severe mistakes. I’m not that good at writing (not to mention that English is not my first language), so things I say may be very unclear.

As you can see, I’m not even terribly convinced that my “super-smart people don’t exist” theory is true. I would love to see if other people have good evidence or insight one way or the other.

Another by-the-way: very often a blatantly wrong belief might still be the least-wrong belief given someone’s web of beliefs. Often it’s easier to believe some minor wrong than to rebuild your whole belief system, risking far more damage, just to make something small come out correct. So perhaps even my test for being really, really wrong is not all that useful.

Interesting picks. I hadn’t thought of Cosma Shalizi as ‘super-smart’ before, just erudite and with a better memory for the books and papers he’s read than me. Will have to think about that...

Then, there are normal people who are pretty much incapable of original highly insightful thought, critical thinking, rationality, etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise, and that’s about it. They often make the most basic logic mistakes, etc.

if super-smart people exist, it’s fairly certain that some blogs by such people exist. And if such blogs existed, I would expect to have found a few by now.

Why would they blog? They would already know that most people have nothing of interest to tell them; and if they want to tell other people something, they can do it through other channels. If such a person had a blog, it might be for a very narrow reason, and they would simply refrain from talking about matters guaranteed to produce nothing but time-consuming stupidity in response.

It doesn’t seem to me that you have an accurate description of what a super-smart person would do or say, other than matching your beliefs and providing insightful thought. For example, do you expect super-smart people to be proficient in most areas of knowledge, or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.

I don’t know what the correct super-smartness cluster is, so I cannot make an objective, predictive definition, at least yet. There’s no need to suffer from physics envy here—a lot of useful knowledge has this kind of vagueness. Nobody has managed to define “pornography” yet, and it’s a far easier concept than “super-smartness”. This kind of speculation might end up with something useful with some luck (or not).

Even defining by example would be difficult. My canonical examples would be Feynman and Einstein—they seem far smarter than “normally smart” people.

Let’s say I collected a sufficiently large sample of “people who seem super-smart”, got as accurate information about them as possible, and did a proper comparison between them and a background of normally smart people (it’s pretty easy to get good data on those, even by generic proxies like education—so I’m least worried about that) in a way that would be robust against even a large number of data errors. That’s about the best I can think of.

Unfortunately it would be of no use, as my sample would not be random super-smart people but those super-smart people who are also sufficiently famous for me to know about them and be aware of their super-smartness. This isn’t what I want to measure at all. And I cannot think of any reasonable way to separate these.

So the project is most likely doomed. It was interesting to think about anyway.

I’m not sure that the abil­ity to have origi­nal thoughts is at all closely con­nected to the abil­ity to think ra­tio­nally. What makes you reach that con­clu­sion?

Un­for­tu­nately there’s a sec­ond ob­ser­va­tion—there’s no rea­son to be­lieve such peo­ple ex­isted only in the past, or would have aver­sion to blog­ging—so if su­per-smart peo­ple ex­ist, it’s fairly cer­tain that some blogs of such peo­ple ex­ist. And if such blogs ex­isted, I would ex­pect to have found a few by now.

Have you tried look­ing at Ter­ence Tao’s blog? I think he fits your model, but it may be that many of his posts will be too tech­ni­cal for a non-math­e­mat­i­cian. I’m not sure in gen­eral if blog­ging is a good medium for ac­tu­ally find­ing this sort of thing. It is easy to see if a blog­ger isn’t very smart. it isn’t clear to me that it is a medium that al­lows one to eas­ily tell if some­one is very smart.

I doubt your dis­proof of su­per-smart peo­ple, for the very same rea­sons you do, per­haps with a greater weight as­signed to those rea­sons.

I am also not sure about your defi­ni­tion of su­per-smart. Is idiot sa­vant (in math, say) su­per-smart? If you mean su­per-smart=con­sis­tently ra­tio­nal, I sus­pect noth­ing pre­vents peo­ple of nor­mal-smart IQ from scor­ing (su­per) well there, trad­ing off quan­tity of ideas for qual­ity. There is a ceiling there as good ideas get more com­plex and re­quire more pro­cess­ing power, but I sus­pect given how crazy this world is Norm Smart the Ra­tion­al­ist can score sur­pris­ingly highly on rel­a­tive ba­sis.

As a data point you might want to look at “Mon­ster Minds” chap­ter of Feyn­man’s “Surely you’re jok­ing”. Since you men­tioned Feyn­man. The chap­ter is about Ein­stein.

There is an im­por­tant sys­tem­atic bias you only tan­gen­tially men­tion in your anal­y­sis. Su­per-smart peo­ple (more gen­er­ally, very suc­cess­ful peo­ple) don’t feel they have to prove them­selves all the time. (Espe­cially if they are tenured. :) ) Many of them like to talk be­fore they think. There are very smart peo­ple around them who quickly spot the ob­vi­ous mis­takes and la­bo­ri­ously com­plete the half-baked ideas. It is just more eco­nomic this way.

My point is that I have trouble telling the difference between a fairly-smart and a super-smart person by their writing, for exactly the reason you mentioned. But in-person conversations give you access to the raw material, and, if I take myself to be fairly smart, there are definitely super-smart people out there. For example, I imagine that if you had gotten to talk to Richard Feynman while he was alive, you would have quickly realized he was a super-smart person.

I’m not sure about this. I have a lot of trouble distinguishing between just smart, super-smart, and smart-and-an-expert-in-their-field. Quick interactions don’t seem to be enough to tell them apart. I can distinguish people in my own field to some extent, but outside my own area it is much more difficult. Worse, there are serious cognitive biases around estimating intelligence: people are more likely to think of someone as smart if they share interests, and also more likely to do so if they agree on issues. (Actually, I don’t have a citation for this one and a quick Google search doesn’t turn it up; does someone else maybe have a citation?) One could imagine that many people, on meeting a near copy of themselves, would conclude that the copy was a genius. That said, I’m pretty sure that there are at least a few people out there who reasonably do qualify as super-smart. But to some extent, that’s based more on their myriad accomplishments than on any personal interaction.

For­give me if this is beat­ing a dead horse, or if some­one brought up an equiv­a­lent prob­lem be­fore; I didn’t see such a thing.

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems equivalent to DSvsT, is easily understandable via my moral intuition, and gives the “wrong” (i.e., not purely utilitarian) answer.

Suppose I have ten people and a stick. The appropriate infinitely powerful theoretical being offers me a choice: I can hit each of the ten once with the stick, or I can hit one of them nine times. “Hitting with a stick” has some constant negative utility for all the people. What do I do?

This seems to me to be exactly dust specks vs. torture scaled down to humanly intuitable scales. I think the obvious answer is to hit all the people once. Examining my intuition tells me that this is because I think the aggregation function for utility is different across different people than across one person’s possible futures. Specifically, my intuition tells me to maximize, across people, the minimum expected utility across an individual’s future.
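A toy comparison of the two aggregation rules may make the disagreement concrete (my own numbers, assuming a constant disutility of −1 per blow, as the problem stipulates):

```python
# Two ways to aggregate utility across people, applied to the stick example.

def total_utility(outcomes):
    """Utilitarian rule: sum utilities across people."""
    return sum(outcomes)

def maximin(outcomes):
    """The intuition above: judge an option by its worst-off person."""
    return min(outcomes)

hit_all_once = [-1] * 10             # ten people, one blow each
hit_one_nine_times = [-9] + [0] * 9  # one person takes all nine blows

# The utilitarian sum prefers concentrating the blows (-9 > -10),
# while maximin prefers spreading them out (-1 > -9).
assert total_utility(hit_one_nine_times) > total_utility(hit_all_once)
assert maximin(hit_all_once) > maximin(hit_one_nine_times)
```

So with these numbers the two rules genuinely come apart: the sum favors hitting one person nine times, the maximin intuition favors hitting everyone once.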

So, is there a name for this po­si­tion?

Do peo­ple think my ex­am­ple is equiv­a­lent to DSvsT?

Do peo­ple get the same or differ­ent an­swer with this ques­tion as they do with DSvsT?

DSvsT was not di­rectly an ar­gu­ment for util­i­tar­i­anism, it was an ar­gu­ment for trade­offs and quan­ti­ta­tive think­ing and against any kind of rigid rules, sa­cred val­ues, or qual­i­ta­tive think­ing which pre­vents trade­offs. For any two things, both of which have some nonzero value, there should be some point where you are will­ing to trade off one for the other—even if one seems wildly less im­por­tant than the other (like dust specks com­pared to tor­ture). Utili­tar­i­anism pro­vides a spe­cific an­swer for where that point is, but the DSvsT post didn’t ar­gue for the util­i­tar­ian an­swer, just that the point had to be at less than 3^^^3 dust specks. You would prob­a­bly have to be con­vinced of util­i­tar­i­anism as a the­ory be­fore ac­cept­ing its ex­act an­swer in this par­tic­u­lar case.

The stick-hit­ting ex­am­ple doesn’t challenge the claim about trade­offs, since most peo­ple are will­ing to trade off one per­son get­ting hit mul­ti­ple times with many peo­ple each get­ting hit once, with their choice de­pend­ing on the num­bers. In a sta­dium full of 100,000 peo­ple, for in­stance, it seems bet­ter for one per­son to get hit twice than for ev­ery­one to get hit once. Your al­ter­na­tive rule (max­imin) doesn’t al­low some trade­offs, so it leads to im­plau­si­ble con­clu­sions in cases like this 100,000x1 vs. 1x2 ex­am­ple.
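A quick check of the stadium case, again assuming a constant disutility of −1 per blow:

```python
# The 100,000x1 vs. 1x2 case from the comment above.
everyone_hit_once = [-1] * 100_000
one_person_hit_twice = [-2] + [0] * 99_999

# Maximin judges an option only by its worst-off person, so it
# prefers 100,000 blows spread evenly (worst case -1) over just
# two blows landing on one person (worst case -2), even though
# the total harm differs by a factor of 50,000.
assert min(everyone_hit_once) > min(one_person_hit_twice)
assert sum(everyone_hit_once) < sum(one_person_hit_twice)
```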

I don’t think max­imis­ing the min­ima is what you want. Sup­pose your choice is to hit one per­son 20 times, or five peo­ple 19 times each. Un­less your in­tu­ition is differ­ent from mine, you’ll pre­fer the first op­tion.

There’s one differ­ence, which is that the in­equal­ity of the dis­tri­bu­tion is much more ap­par­ent in your ex­am­ple, be­cause one of the op­tions dis­tributes the pain perfectly evenly. If you value equal­ity of dis­tri­bu­tion as worth more than one unit of pain, it makes sense to choose the equal dis­tri­bu­tion of pain. This is similar to eco­nomic dis­cus­sions about poli­cies that lead to greater wealth, but greater eco­nomic in­equal­ity.

I think the point of Dust Specks vs. Torture was scope failure. Even allowing for some sort of “negative marginal utility”, once you hit a wacky number like 3^^^3 it doesn’t matter: 0.000001 negative utility points multiplied by 3^^^3 is worse than anything, because 3^^^3 is wacky huge.

For the stick example, I’d say it would have to depend on a lot of factors about human psychology and such, but I think I’d hit the one. Marginal utility of a good tends to diminish, and I think that the shock of repeated blows would be less than the shock of one blow each against ten separate people.

I think your opinion is basically an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.

I think you’re mis­taken about the marginal util­ity—be­ing hit again af­ter you’ve already been in­jured (es­pe­cially if you’re hit on the same spot) is prob­a­bly go­ing to be worse than the first blow.

Marginal di­su­til­ity could plau­si­bly work in the op­po­site di­rec­tion from marginal util­ity.

Each 10% of your money that you lose im­pacts your qual­ity of life more. Each 10% of money that you gain im­pacts your qual­ity of life less. There might be thresh­old effects for both, but I think the di­rec­tion is right.

Part of the as­sump­tion of the prob­lem was that hit­ting with a stick has some con­stant nega­tive util­ity for all the peo­ple.

It’s always hard to think about this sort of thing. I read that in the original problem, but then I ended up thinking about actually hitting people with sticks when deciding what was best. Is there anything in the archives like The True Prisoner’s Dilemma, but giving an intuitive version of problems with adding utility?

“Nick Bostrom’s Si­mu­la­tion Ar­gu­ment (SA) has many in­trigu­ing the­olog­i­cal im­pli­ca­tions. We work out some of them here. We show how the SA can be used to de­velop novel ver­sions of the Cos­molog­i­cal and De­sign Ar­gu­ments. We then de­velop some of the af­fini­ties be­tween Bostrom’s nat­u­ral­is­tic theogony and more tra­di­tional the­olog­i­cal top­ics. We look at the re­s­ur­rec­tion of the body and at theod­icy. We con­clude with some re­flec­tions on the re­la­tions be­tween the SA and Neo­pla­ton­ism (friendly) and be­tween the SA and the­ism (less friendly).”

I’m slightly tempted to, be­cause that ar­ti­cle is sloppy and un­fo­cused enough that it an­noys me, even though it’s broadly ac­cu­rate. (I mean, ‘the stan­dard statis­ti­cal sys­tem for draw­ing con­clu­sions is, in essence, illog­i­cal’? Really?) But I don’t know what I’d have to add to it, re­ally, other than ba­si­cally whin­ing ‘it is so un­fair!’

I’ve been read­ing the Quan­tum Me­chan­ics se­quence, and I have a ques­tion about Many-Wor­lds. My un­der­stand­ing of MWI and the rest of QM is pretty much limited to the LW se­quence and a bit of Wikipe­dia, so I’m sure there will be no short­age of peo­ple here who have a bet­ter knowl­edge of it and can help me.

My question is this: why are the Born Probabilities a problem for MWI?

I’m sure it’s a very difficult prob­lem, I think I just fail to un­der­stand the im­pli­ca­tions of some step along the way. FWIW, my un­der­stand­ing of the Born Prob­a­bil­ities mainly clicks here:

If a whole gigantic human experimenter made up of quintillions of particles,

Interacts with one teensy little atom whose amplitude factor has a big bulge on the left and a small bulge on the right,

Then the resulting amplitude distribution, in the joint configuration space,

Has a big amplitude blob for “human sees atom on the left”, and a small amplitude blob of “human sees atom on the right”.

And what that means, is that the Born probabilities seem to be about finding yourself in a particular blob, not the particle being in a particular place.

Firstly, I know prob­a­bil­ity is the wrong word, but I’m go­ing to use it here, in­suffi­ciently, in the same way that it’s nor­mally in­suffi­ciently used to talk about QM. I sure hope that’s okay be­cause it is a pain to nail down in English.

So… If a quantum event has a 30% chance of going LEFT and a 70% chance of going RIGHT (which you could observe without entangling yourself, for example by blasting a whole bunch of photons through slits and seeing the overall density pattern without measuring individual photons) (I think), then if you entangle yourself with a single instance of it, you’ll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.

So why is this surprising? Obviously if we’re just counting observers then we would expect a 50/50 probability spread, but I assume the problem isn’t that naive. Obviously if the particles themselves exhibit a 30/70 preference, then we, being made of particles, should expect to do the same. Or, if the particles themselves can exist along a (pseudo)probability continuum, then why should we, the entangled, not expect to do the same? If those quarks are 70/30, then why aren’t yours? Why should MWI necessarily imply the sudden creation of exactly two worlds with equal weight, as opposed to just dividing experience, locally and where necessary, into a weighted continuum?

I think I’ll try this from another angle. MWI gets points for treating people/observers as particles, governed by the same laws as everything else. But are we really treating ourselves equally if we don’t assume that we too follow this 30/70 split? It seems like this should be the default assumption, the one requiring no extra postulates: that we divide up not into discrete worlds but along a weighted continuum. Obviously it’s easier on our typical conception of consciousness if we can just have the whole universe split neatly in two, but the continuum view feels to me like putting the weirdness where it logically belongs (on our comparatively weak understanding of conscious experience).

Hope this makes at least some sense to someone who can steer me in the right direction. I’d appreciate responses on where specifically I’ve erred, as this will continue to bug me until I see exactly where I went wrong. Thanks in advance.

So… If a quan­tum event has a 30% chance of go­ing LEFT and a 70% chance of go­ing right . . . you’ll have a 30% prob­a­bil­ity of ob­serv­ing LEFT and a 70% prob­a­bil­ity of ob­serv­ing RIGHT.

So why is this sur­pris­ing?

The sur­pris­ing (or con­fus­ing, mys­te­ri­ous, what have you) thing is that quan­tum the­ory doesn’t talk about a 30% prob­a­bil­ity of LEFT and a 70% prob­a­bil­ity of RIGHT; what it talks about is how LEFT ends up with an “am­pli­tude” of 0.548 and RIGHT with an “am­pli­tude” of 0.837. We know that the ob­served prob­a­bil­ity ends up be­ing the square of the ab­solute value of the am­pli­tude, but we don’t know why, or how this even makes sense as a law of physics.
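The arithmetic behind those numbers, for anyone following along (a sketch only: real amplitudes are complex numbers with phases, which I’m ignoring here):

```python
import math

# The quoted amplitudes are just square roots of the observed
# probabilities.
amp_left = math.sqrt(0.3)   # ~0.548
amp_right = math.sqrt(0.7)  # ~0.837

# Born rule: observed probability = |amplitude|^2,
# and the squared amplitudes of a state sum to 1.
assert abs(amp_left ** 2 - 0.3) < 1e-12
assert abs(amp_left ** 2 + amp_right ** 2 - 1.0) < 1e-12
```

The mystery is exactly that the squaring step works empirically but nothing in the theory explains why experience should track the squared amplitude rather than, say, the amplitude itself.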

This might be old news to ev­ery­one “in”, or just plain ob­vi­ous, but a cou­ple days ago I got Vladimir Nesov to ad­mit he doesn’t ac­tu­ally know what he would do if faced with his Coun­ter­fac­tual Mug­ging sce­nario in real life. The rea­son: if to­day (be­fore hav­ing seen any su­per­nat­u­ral crea­tures) we in­tend to re­ward Omegas, we will lose for cer­tain in the No-mega sce­nario, and vice versa. But we don’t know whether Omegas out­num­ber No-megas in our uni­verse, so the ques­tion “do you in­tend to re­ward Omega if/​when it ap­pears” is a bead jar guess.

The caveat is of course that Coun­ter­fac­tual Mug­ging or New­comb Prob­lem are not to be an­a­lyzed as situ­a­tions you en­counter in real life: the ar­tifi­cial el­e­ments that get in­tro­duced are speci­fied ex­plic­itly, not by an up­date from sur­pris­ing ob­ser­va­tion. For ex­am­ple, the con­di­tion that Omega is trust­wor­thy can’t be cred­ibly ex­pected to be ob­served.

The thought ex­per­i­ments ex­plic­itly de­scribe the en­vi­ron­ment you play your part in, and your knowl­edge about it, the state of things that is much harder to achieve through a se­quence of real-life ob­ser­va­tions, by up­dat­ing your cur­rent knowl­edge.

I dunno, New­comb’s Prob­lem is of­ten pre­sented as a situ­a­tion you’d en­counter in real life. You’re sup­posed to be­lieve Omega be­cause it played the same game with many other peo­ple and didn’t make mis­takes.

In any case I want a de­ci­sion the­ory that works on real life sce­nar­ios. For ex­am­ple, CDT doesn’t get con­fused by such ex­plo­sions of coun­ter­fac­tu­als, it works perfectly fine “lo­cally”.

ETA: My ar­gu­ment shows that mod­ify­ing your­self to never “re­gret your ra­tio­nal­ity” (as Eliezer puts it) is im­pos­si­ble, and mod­ify­ing your­self to “re­gret your ra­tio­nal­ity” less rather than more re­quires elic­i­ta­tion of your prior with hu­manly im­pos­si­ble ac­cu­racy (as you put it). I think this is a big deal, and now we need way more con­vinc­ing prob­lems that would mo­ti­vate re­search into new de­ci­sion the­o­ries.

If you do pre­sent ob­ser­va­tions that move the be­liefs to rep­re­sent the thought ex­per­i­ment, it’ll work just as well as the mag­i­cally con­trived thought ex­per­i­ment. But the ab­sence of rele­vant No-megas is part of the set­ting, so it too should be a con­clu­sion one draws from those ob­ser­va­tions.

Yes, but you must make the pre­com­mit­ment to love Omegas and hate No-megas (or vice versa) be­fore you re­ceive those ob­ser­va­tions, be­cause that pre­com­mit­ment of yours is ex­actly what they’re judg­ing. (I think you see that point already, and we’re prob­a­bly ar­gu­ing about some minor mi­s­un­der­stand­ing of mine.)

You never have to de­cide in ad­vance, to pre­com­mit. Precom­mit­ment is use­ful as a sig­nal to those that can’t fol­low your full thought pro­cess, and so you re­place it with a sim­ple rule from some point on (“you’ve already de­cided”). For Omegas and No-megas, you don’t have to pre­com­mit, be­cause they can fol­low any thought pro­cess.

I thought about it some more and I think you’re ei­ther con­fused some­where, or mis­rep­re­sent­ing your own opinions. To clear things up let’s con­vert the whole prob­lem state­ment into ob­ser­va­tional ev­i­dence.

Sce­nario 1: Omega ap­pears and gives you con­vinc­ing proof that Up­silon doesn’t ex­ist (and that Omega is trust­wor­thy, etc.), then pre­sents you with CM.

Sce­nario 2: Up­silon ap­pears and gives you con­vinc­ing proof that Omega doesn’t ex­ist, then pre­sents you with anti-CM, tak­ing into ac­count your coun­ter­fac­tual ac­tion if you’d seen sce­nario 1.

You wrote: “If you do pre­sent ob­ser­va­tions that move the be­liefs to rep­re­sent the thought ex­per­i­ment, it’ll work just as well as the mag­i­cally con­trived thought ex­per­i­ment.” Now, I’m not sure what this sen­tence was sup­posed to mean, but it seems to im­ply that you would give up $100 in sce­nario 1 if faced with it in real life, be­cause re­ceiv­ing the ob­ser­va­tions would make it “work just as well as the thought ex­per­i­ment”. This means you lose in sce­nario 2. No?

Omega would need to con­vince you that Up­silon not just doesn’t ex­ist, but couldn’t ex­ist, and that’s in­con­sis­tent with sce­nario 2. Other­wise, you haven’t moved your be­liefs to rep­re­sent the thought ex­per­i­ment. Up­silon must be ac­tu­ally im­pos­si­ble (less prob­a­ble) in or­der for it to be pos­si­ble for Omega to cor­rectly con­vince you (with­out de­cep­tion).

Being updateless, your decision algorithm is only interested in observations insofar as they resolve logical uncertainty and say which situations you actually control (again, a sort of logical uncertainty); but observations can’t refute the logically possible, so they can’t make Upsilon impossible if it wasn’t already impossible.

Omega would need to con­vince you that Up­silon not just doesn’t ex­ist, but couldn’t ex­ist, and that’s in­con­sis­tent with sce­nario 2.

No it’s not in­con­sis­tent. Coun­ter­fac­tual wor­lds don’t have to be iden­ti­cal to the real world. You might as well say that Omega couldn’t have simu­lated you in the coun­ter­fac­tual world where the coin came up heads, be­cause that world is in­con­sis­tent with the real world. Do you be­lieve that?

By “Up­silon couldn’t ex­ist”, I mean that Up­silon doesn’t live in any of the pos­si­ble wor­lds (or only in in­signifi­cantly few of them), not that it couldn’t ap­pear in the pos­si­ble world where you are speak­ing with Omega.

The con­ven­tion is that the pos­si­ble wor­lds don’t log­i­cally con­tra­dict each other, so two differ­ent out­comes of coin tosses ex­ist in two slightly differ­ent wor­lds, both of which you care about (this situ­a­tion is not log­i­cally in­con­sis­tent). If Up­silon lives on such a differ­ent pos­si­ble world, and not on the world with Omega, it doesn’t make Up­silon im­pos­si­ble, and so you care what it does. In or­der to repli­cate Coun­ter­fac­tual Mug­ging, you need the pos­si­ble wor­lds with Up­silons to be ir­rele­vant, and it doesn’t mat­ter that Up­silons are not in the same world as the Omega you are talk­ing to.

(How to cor­rectly perform coun­ter­fac­tual rea­son­ing on con­di­tions that are log­i­cally in­con­sis­tent (such as the pos­si­ble ac­tions you could make that are not your ac­tual ac­tion), or rather how to math­e­mat­i­cally un­der­stand that rea­son­ing is the sep­til­lion dol­lar ques­tion.)

Ah, I see. You’re say­ing Omega must prove to you that your prior made Up­silon less likely than Omega all along. (By the way, this is an in­ter­est­ing way to look at modal logic, I won­der if it’s pub­lished any­where.) This is a very tall or­der for Omega, but it does make the two sce­nar­ios log­i­cally in­con­sis­tent. Un­less they in­volve “de­cep­tion”—e.g. Omega tweak­ing the mind of coun­ter­fac­tual-you to be­lieve a false proof. I won­der if the prob­lem still makes sense if this is al­lowed.

Surely the last thing on any­one’s mind, hav­ing been per­suaded they’re in the pres­ence of Omega in real life, is whether or not to give $100 :)

I like the No-mega idea (it’s similar to a re­fu­ta­tion of Pas­cal’s wa­ger by in­vok­ing con­trary gods), but I wouldn’t raise my ex­pec­ta­tion for the num­ber of No-mega en­coun­ters I’ll have by very much upon en­coun­ter­ing a soli­tary Omega.

Gen­er­al­iz­ing No-mega to in­clude all sorts of var­i­ants that re­ward stupid or per­verse be­hav­ior (are there more pos­si­ble God-likes that re­ward things strange and alien to us?), I’m not in the least bit con­cerned.

I sup­pose it’s just a good ar­gu­ment not to make plans for your life on the ba­sis of imag­ined God-like be­ings. There should be as many gods who, when pleased with your ac­tion, in­ter­vene in your life in a way you would not con­sider pleas­ant, and are pleased at things you’d con­sider ar­bi­trary, as those who have similar val­ues they’d like us to ex­press, and/​or ac­tu­ally re­ward us co­paceti­cally.

I wouldn’t raise my ex­pec­ta­tion for the num­ber of No-mega en­coun­ters I’ll have by very much upon en­coun­ter­ing a soli­tary Omega.

You don’t have to. Both Omega and No-mega de­cide based on what your in­ten­tions were be­fore see­ing any su­per­nat­u­ral crea­tures. If right now you say “I would give money to Omega if I met one”—fac­tor­ing in all be­lief ad­just­ments you would make upon see­ing it—then you should say the re­verse about No-mega, and vice versa.

ETA: Listen, I just had a funny idea. Now that we have this nifty weapon of “ex­plod­ing coun­ter­fac­tu­als”, why not ap­ply it to New­comb’s Prob­lem too? It’s an im­prob­a­ble enough sce­nario that we can make up a similarly im­prob­a­ble No-mega that would re­ward you for coun­ter­fac­tual two-box­ing. Damn, this tech­nique is too pow­er­ful!

By not be­liev­ing No-mega is prob­a­ble just be­cause I saw an Omega, I mean that I plan on con­sid­er­ing such situ­a­tions as they arise on the ba­sis that only the types of godlike be­ings I’ve seen to date (so far, none) ex­ist. I’m in­clined to say that I’ll de­cide in the way that makes me hap­piest, pro­vided I be­lieve that the godlike be­ing is hon­est and re­ally can know my pre­com­mit­ment.

I re­al­ize this leaves me vuln­er­a­ble to the first godlike huck­ster offer­ing me a de­cent ex­clu­sive deal; I guess this im­plies that I think I’m much more likely to en­counter 1 godlike be­ing than many.

You haven’t con­sid­ered the full ex­tent of the dam­age. What is your prior over all crazy mind-read­ing agents that can re­ward or pun­ish you for ar­bi­trary coun­ter­fac­tual sce­nar­ios? How can you be so sure that it will bal­ance in fa­vor of Omega in the end?

In fact, I can con­sider all crazy mind-read­ing re­ward/​pun­ish­ment agents at once: For ev­ery such hy­po­thet­i­cal agent, there is its hy­po­thet­i­cal dual, with the op­po­site be­hav­ior with re­spect to my sta­tus as be­ing coun­ter­fac­tu­ally-mug­gable (the one re­ward­ing what the other pun­ishes, and vice versa). Every such agent is the dual of its own dual; in the uni­ver­sal prior, be­ing ap­proached by an agent is about as likely as be­ing ap­proached by its dual; and I don’t think I have any ev­i­dence that one agent will be more likely to ap­pear than its dual. Thus, my to­tal ex­pected pay­off from these agents is 0.
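For concreteness, here is a toy version of that symmetry argument (my own construction, not anything from the thread): an agent is a pair of payoffs, one for each of your two possible dispositions, and the dual simply swaps them.

```python
import itertools

# An agent is a pair (payoff_if_you_are_muggable, payoff_if_you_are_not).
rewards = [-100, 0, 100]
agents = list(itertools.product(rewards, repeat=2))

def dual(agent):
    """The dual agent rewards exactly what the original punishes."""
    r_mug, r_unmug = agent
    return (r_unmug, r_mug)

# The class is closed under duality, and each agent cancels against
# its dual, so the expected payoff under a prior that weights an
# agent and its dual equally is zero, whichever disposition you adopt.
assert all(dual(a) in agents for a in agents)
for disposition in (0, 1):  # 0: muggable, 1: unmuggable
    assert sum(a[disposition] for a in agents) == 0
```

The interesting question, as noted below, is whether Omega itself escapes this cancellation, since its would-be dual has to misrepresent itself.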

Omega it­self does not be­long to this class of agent; it has no dual. (ETA: It has a dual, but the dual is a de­cep­tive Omega, which is much less prob­a­ble than Omega. See be­low.) So Omega is the only one I should worry about.

I should add that I feel a lit­tle un­easy be­cause I can’t prove that these in­finites­i­mal pri­ors don’t dom­i­nate ev­ery­thing when the sym­me­try is bro­ken, es­pe­cially when the stakes are high.

Okay, I’ll be more ex­plicit: I am con­sid­er­ing the class of agents who be­have one way if they pre­dict you’re mug­gable and be­have an­other way if they pre­dict you’re un­mug­gable. The dual of an agent be­haves ex­actly the same as the origi­nal agent, ex­cept the be­hav­iors are re­versed. In sym­bols:

An agent A has two be­hav­iors.

If it predicts you’d give Omega $5, it will exhibit behavior X; otherwise, it will exhibit behavior Y.

Omega, for instance: if it predicts you’d give Omega $5, it will flip a coin and give you $100 on heads; otherwise, nothing. In either case, it will tell you the rules of the game.

What would Omega* be?

If Omega* predicts you’d give Omega $5, it will do nothing. Otherwise, it will flip a coin and give you $100 on heads. In either case, it will assure you that it is Omega, not Omega*.

So the dual of Omega is some­thing that looks like Omega but is in fact de­cep­tive. By hy­poth­e­sis, Omega is trust­wor­thy, so my prior prob­a­bil­ity of en­coun­ter­ing Omega* is neg­ligible com­pared to meet­ing Omega.

(So yeah, there is a dual of Omega, but it’s much less prob­a­ble than Omega.)

Then, when I calculate expected utility, each agent A is balanced by its dual A*, but Omega is not balanced by Omega*.

If we as­sume you can tell “de­cep­tive” agents from “non-de­cep­tive” ones and shift prob­a­bil­ity weight ac­cord­ingly, then not ev­ery agent is bal­anced by its dual, be­cause some “de­cep­tive” agents prob­a­bly have “non-de­cep­tive” du­als and vice versa. No?

The reason we shift probability weight away from the deceptive Omega* is that, in the original problem, we are told that we believe Omega to be non-deceptive. The reasoning goes like this: if it looks like Omega and talks like Omega, then it might be Omega or Omega*. But if it were Omega*, then it would be deceiving us, so it’s most probably Omega.

In the origi­nal prob­lem, we have no rea­son to be­lieve that No-mega and friends are non-de­cep­tive.

(But if we did, then yes, the dual of a non-de­cep­tive agent would be de­cep­tive, and so have lower prior prob­a­bil­ity. This would be a differ­ent prob­lem, but it would still have a sym­me­try: We would have to define a differ­ent no­tion of dual, where the dual of an agent has the re­versed be­hav­ior and also re­verses its claims about its own be­hav­ior.

What would Omega* be in that case? It would not claim to be Omega. It would truth­fully tell you that if it pre­dicted you would not give it $5 on tails, then it would flip a coin and give you $100 on heads; and oth­er­wise it would not give you any­thing. This has no bear­ing on your de­ci­sion in the Omega prob­lem.)

By your defi­ni­tions, Omega would con­di­tion its de­ci­sion on you be­ing coun­ter­fac­tu­ally mug­gable by the origi­nal Omega, not on you giv­ing money to Omega it­self. Or am I los­ing the plot again? This no­tion of “du­al­ity” seems to be get­ting more and more com­plex.

“Dual­ity” has be­come more com­plex be­cause we’re now talk­ing about a more com­plex prob­lem — a ver­sion of Coun­ter­fac­tual Mug­ging where you be­lieve that all su­per­in­tel­li­gent agents are trust­wor­thy. The old ver­sion of du­al­ity suffices for the or­di­nary Coun­ter­fac­tual Mug­ging prob­lem.

My the­sis is that there’s always a sym­me­try in the space of black swans like No-mega.

In the case cur­rently un­der con­sid­er­a­tion, I’m as­sum­ing Omega’s spiel goes some­thing like “I just flipped a coin. If it had been heads, I would have pre­dicted what you would do if I had ap­proached you and given my spiel....” No­tice the use of first-per­son pro­nouns. Omega* would have al­most the same spiel ver­ba­tim, also us­ing first-per­son pro­nouns, and make no refer­ence to Omega. And, be­ing non-de­cep­tive, it would be­have the way it says it does. So it wouldn’t con­di­tion on your be­ing mug­gable by Omega.

You could ob­ject to this by claiming that Omega ac­tu­ally says “I am Omega. If Omega had come up to you and said....”, in which case I can come up with a third no­tion of du­al­ity.

If Omega* makes no refer­ence to the origi­nal Omega, I don’t un­der­stand why they have “op­po­site be­hav­ior with re­spect to my sta­tus as be­ing coun­ter­fac­tu­ally-mug­gable” (by the origi­nal Omega), which was your rea­son for in­vent­ing “du­al­ity” in the first place. I apol­o­gize, but at this point it’s un­clear to me that you ac­tu­ally have a proof of any­thing. Maybe we can take this dis­cus­sion to email?

This is an overview of his self-ex­per­i­ments (to im­prove his mood and sleep, and to lose weight), with ar­gu­ments that self-ex­per­i­men­ta­tion, es­pe­cially on the brain, is re­mark­ably effec­tive in find­ing use­ful, im­plau­si­ble, low-cost im­prove­ments in qual­ity of life, while in­sti­tu­tional sci­ence is not.

There’s a lot about status and science (it took Roberts 10 years to start getting results, and it’s just too risky to scientists’ careers to take on projects which last that long), and some intriguing theory at the end: activities can be classified into exploitation (low risk, low reward) and exploration (high risk, high reward), and people aren’t apt to want to do exploration full time, so, if given a job that’s full-time exploration (like institutional science), they’ll turn most of it into exploitation.

After more-or-less suc­cess­fully avoid­ing it for most of LW’s his­tory, we’ve plunged head­long into mind-kil­ler ter­ri­tory. I’m a lit­tle bit wor­ried, and I’m in­trigued to find out what long-time LWers, es­pe­cially those who’ve been hes­i­tant about ven­tur­ing that di­rec­tion, ex­pect to see as a re­sult over the next month or two.

It doesn’t look en­courag­ing. The dis­cus­sions just don’t con­verge, they me­an­der all over the place and leave no crys­tal­line resi­due of cor­rect an­swers. (Achieve­ment un­locked: Mixed Me­taphor)

It is prob­le­matic but nec­es­sary, in my opinion. Poli­tics IS the mind-kil­ler, but poli­tics DOES mat­ter. Avoid­ing the topic would seem to be an ad­mis­sion that this ra­tio­nal­ity thing is re­ally just a pretty toy.

I think a cur­rent policy de­bate has po­ten­tial for bet­ter re­sults, since it would offer the po­ten­tial for bet­ting, and avoid some of the self-iden­ti­fi­ca­tion and loy­alty that’s hard to avoid when ap­ply­ing a model as sim­ple as a poli­ti­cal philos­o­phy to some­thing as com­plex as hu­man cul­ture.

Since we’ve had some discussion about additions/modifications to the site, and LW—as I understand it—was originally a sort of spin-off from OB, maybe the addition of a karma-based prediction market of some sort would be suitable (and very interesting).

I think hav­ing such a low-stakes game to play would be benefi­cial not only to highly risk-averse in­di­vi­d­u­als, but to any­one. It would provide a use­ful train­ing ground (maybe even a com­pet­i­tive lad­der in a ra­tio­nal­ity dojo) for any­one who wants to also play with higher stakes el­se­where.

Edit: I’m cur­rently a mediocre pro­gram­mer (and in­tend to be­come good via some prac­tice). And while I don’t par­ti­ci­pate of­ten in the com­mu­nity (yet), this could be fun and ed­u­ca­tional enough that I would be will­ing to con­tribute a fairly sub­stan­tial amount of labour to it. If any­one with marginally more know-how is will­ing to im­ple­ment such an idea, let me know and I’ll join up.

My feel­ings on this are mixed. I’ve found LW to be a re­fresh­ing re­fuge from such quar­rels. On the other hand, with­out care­ful thought poli­ti­cal de­bates re­li­ably de­scend into mad­ness quickly, and it is not as if poli­tics is unim­por­tant. Per­haps tak­ing the men­tal tech­niques dis­cussed here to other fo­rums could im­prove the gen­er­ally atro­cious level of rea­son­ing usu­ally found in on­line poli­ti­cal dis­cus­sions, though I ex­pect the effect would be small.

Thought I might pass this along and file it un­der “failure of ra­tio­nal­ity”. Sadly, this kind of thing is in­creas­ingly com­mon—get­ting deep in ed­u­ca­tion debt, but not hav­ing in­creased earn­ing power to ser­vice the debt, even with a de­gree from a re­spected uni­ver­sity.

Sum­mary: Cort­ney Munna, 26, went $100K into debt to get worth­less de­grees and is defer­ring pay­ment even longer, mak­ing in­ter­est pile up fur­ther. She works in an un­re­lated area (pho­tog­ra­phy) for $22/​hour, and it doesn’t sound like she has a lot of job se­cu­rity.

We don’t find out un­til the end of the ar­ti­cle that her de­grees are in women’s stud­ies and re­li­gious stud­ies.

There are much bet­ter ways to spend $100K. Twen­tysome­things like her are filling up the work­force. I’m wor­ried about the fu­ture im­pli­ca­tions.

I thank my lucky stars I’m not in such a po­si­tion (in the re­spects listed in the ar­ti­cle—Munna’s prob­a­bly bet­ter off in other re­spects). I didn’t han­dle col­lege plan­ning as well as I could have, and I re­gret it to this day. But at least I didn’t go deep into debt for a worth­less de­gree.

What’s the substantive difference? In both cases, the young person has taken out a debt intended to amplify earnings by more than the debt costs, but that isn’t going to happen. What does it matter whether the degree was of “any use” or not? What matters is whether it was enough use to cover the debt, not simply whether there exists some gain in earnings due to the debt (which there probably is, though only via signaling, not direct enhancement of human capital).

I agree with his broad points, but on many is­sues, I no­tice he of­ten per­ceives a world that I don’t seem to live in. For ex­am­ple, he says that peo­ple who can sim­ply com­mu­ni­cate in clear English and think clearly are in such short sup­ply that he’d hire some­one or take them on as a grad stu­dent sim­ply for meet­ing that, while I haven’t no­ticed the de­mand for my la­bor (as some­one well above and be­yond that) be­ing like what that kind of short­age would im­ply.

Se­cond, he seems to have this be­lief that the con­sumer credit scor­ing sys­tem can do no wrong. Back when I was un­able to get a mort­gage at prime rates due to lack­ing credit his­tory de­spite be­ing an ideal can­di­date [1], he claimed that the re­fusals were com­pletely jus­tified be­cause I must have been ir­re­spon­si­ble with credit (de­spite not hav­ing bor­rowed...), and he has no rea­son to be­lieve my self-serv­ing story … even af­ter I offered to send him my credit re­port and the re­fusals!

[1] I had no other debts, no de­pen­dents, no bad in­ci­dents on my credit re­port, sta­ble work his­tory from the largest pri­vate em­ployer in the area, and the mort­gage would be for less than 2x my in­come and have less than 1⁄6 of my gross in monthly pay­ments. Yeah, real sub­prime bor­rower there...
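For concreteness, here is a minimal sketch of the affordability check implied by those figures. Only the 2x-income and 1/6-of-gross thresholds come from the comment; the dollar amounts are hypothetical:

```python
# Toy affordability check using the thresholds stated above.
# The income, loan, and payment figures are made up for illustration.

def looks_prime(gross_income, loan_amount, monthly_payment):
    """True if the loan stays under 2x income and payments under 1/6 of monthly gross."""
    under_loan_cap = loan_amount < 2 * gross_income
    under_payment_cap = monthly_payment < (gross_income / 12) / 6
    return under_loan_cap and under_payment_cap

# e.g. $60k gross income, $110k mortgage, $800/month payment:
print(looks_prime(60000, 110000, 800))  # True: meets both stated criteria
```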

One rea­son why the be­hav­ior of cor­po­ra­tions and other large or­ga­ni­za­tions of­ten seems so ir­ra­tional from an or­di­nary per­son’s per­spec­tive is that they op­er­ate in a le­gal minefield. Dodg­ing the con­stant threats of law­suits and reg­u­la­tory penalties while still man­ag­ing to do pro­duc­tive work and turn a profit can re­quire poli­cies that would make no sense at all with­out these ar­tifi­cially im­posed con­straints. This fre­quently comes off as sheer ir­ra­tional­ity to com­mon peo­ple, who tend to imag­ine that big busi­nesses op­er­ate un­der a far more laissez-faire regime than they ac­tu­ally do.

More­over, there is the prob­lem of dis­ec­onomies of scale. Or­di­nary com­mon-sense de­ci­sion crite­ria—such as e.g. look­ing at your life his­tory as you de­scribe it and con­clud­ing that, given these facts, you’re likely to be a re­spon­si­ble bor­rower—of­ten don’t scale be­yond in­di­vi­d­u­als and small groups. In a very large or­ga­ni­za­tion, de­ci­sion crite­ria must in­stead be bu­reau­cratic and for­mal­ized in a way that can be, with rea­son­able cost, brought un­der tight con­trol to avoid wide­spread mis­be­hav­ior. For this rea­son, scal­able bu­reau­cratic de­ci­sion-mak­ing rules must be clear, sim­ple, and based on strictly defined cat­e­gories of eas­ily ver­ifi­able ev­i­dence. They will in­evitably end up pro­duc­ing at least some de­ci­sions that com­mon-sense pru­dence would rec­og­nize as silly, but that’s the cost of scal­a­bil­ity.

Also, it should be noted that these two rea­sons are not in­de­pen­dent. Con­sis­tent ad­her­ence to for­mal­ized bu­reau­cratic de­ci­sion-mak­ing pro­ce­dures is also a pow­er­ful defense against preda­tory plain­tiffs and reg­u­la­tors. If a com­pany can pro­duce pa­pers with clearly spel­led out rules for micro­manag­ing its busi­ness at each level, and these rules are per se con­sis­tent with the tan­gle of reg­u­la­tions that ap­ply to it and don’t give any grounds for law­suits, it’s much more likely to get off cheaply than if its em­ploy­ees are given broad lat­i­tude for com­mon-sense de­ci­sion-mak­ing.

For what it’s worth, the credit score sys­tem makes a lot more sense when you re­al­ize it’s not about eval­u­at­ing “this per­son’s abil­ity to re­pay debt”, but rather “ex­pected profit for lend­ing this per­son money at in­ter­est”.

Some­one who avoids car­ry­ing debt (e.g., pay­ing in­ter­est) is not a good rev­enue source any more than some­one who fails to pay en­tirely. The ideal lendee is some­one who re­li­ably and con­sis­tently makes pay­ment with a max­i­mal in­ter­est/​prin­ci­pal ra­tio.
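As a rough sketch of that distinction (every number here is invented for illustration; real interchange fees and default rates differ):

```python
# Toy model: a lender's expected annual profit from two kinds of cardholders.
# All figures are illustrative, not real industry numbers.

def annual_profit(balance_carried, apr, interchange_revenue, default_prob, exposure):
    """Expected profit = interest income + interchange fees - expected default loss."""
    interest = balance_carried * apr
    expected_loss = default_prob * exposure
    return interest + interchange_revenue - expected_loss

# A "transactor": pays the full balance every month, never pays interest.
transactor = annual_profit(balance_carried=0, apr=0.20,
                           interchange_revenue=60, default_prob=0.001, exposure=2000)

# A "revolver": reliably carries a $3000 balance at 20% APR.
revolver = annual_profit(balance_carried=3000, apr=0.20,
                         interchange_revenue=60, default_prob=0.02, exposure=3000)

print(transactor)  # 58.0: interchange fees only
print(revolver)    # 600.0: interest dominates despite the higher default risk
```

The point is that "ability to repay" and "expected profit" rank these two borrowers in opposite orders.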

This is an­other one of those Han­son-es­que “X is not about X-ing” things.

I think there’s also some Con­ser­va­tion of Thought (1) in­volved—if you have a credit his­tory to be looked at, there are Ac­tual! Records!. If some­one is just solvent and re­li­able and has a good job, then you have to eval­u­ate that.

There may also be a weird­ness fac­tor if rel­a­tively few peo­ple have no debt his­tory.

I think there’s also some Con­ser­va­tion of Thought (1) in­volved—if you have a credit his­tory to be looked at, there are Ac­tual! Records!. If some­one is just solvent and re­li­able and has a good job, then you have to eval­u­ate that.

Ex­cept that there are records (his­tory of pay­ing bills, rent), it’s just that the lenders won’t look at them.

There may also be a weird­ness fac­tor if rel­a­tively few peo­ple have no debt his­tory.

Maybe fi­nan­cial gu­rus should think about that be­fore they say “stay away from credit cards en­tirely”. It should be “You MUST get a credit card, but pay the bal­ance.” (This is an­other case of ad­dic­tive stuff that can’t ad­dict me.)

(Please, don’t bother with ad­vice, the prob­lem has since been solved; credit unions are run by non-idiots, it seems, and don’t make the above lender er­rors.)

ETA: Sorry for the snarky tone; your points are valid, I just dis­agree about their ap­pli­ca­bil­ity to this spe­cific situ­a­tion.

Ex­cept that there are records (his­tory of pay­ing bills, rent), it’s just that the lenders won’t look at them.

Well, is it re­ally pos­si­ble that lenders are so stupid that they’re miss­ing profit op­por­tu­ni­ties be­cause such straight­for­ward ideas don’t oc­cur to them? I would say that lack­ing in­sider in­for­ma­tion on the way they do busi­ness, the ra­tio­nal con­clu­sion would be that, for what­ever rea­sons, ei­ther they are not per­mit­ted to use these crite­ria, or these crite­ria would not be so good af­ter all if ap­plied on a large scale.

Well, is it re­ally pos­si­ble that lenders are so stupid that they’re miss­ing profit op­por­tu­ni­ties be­cause such straight­for­ward ideas don’t oc­cur to them? I would say that lack­ing in­sider in­for­ma­tion on the way they do busi­ness, the ra­tio­nal con­clu­sion would be that, for what­ever rea­sons, ei­ther they are not per­mit­ted to use such crite­ria, or such crite­ria would not be so good af­ter all if ap­plied on a large scale.

No, they do re­quire that in­for­ma­tion to get the sub­prime loan; it’s just that they clas­sified me as sub­prime based purely on the lack of credit his­tory, ir­re­spec­tive of that non-loan his­tory. Pro­vid­ing that in­for­ma­tion, though re­quired, doesn’t get you back into prime ter­ri­tory.

Or maybe the rea­son is that credit unions are op­er­at­ing un­der differ­ent le­gal con­straints and, be­ing smaller, they can af­ford to use less tightly for­mal­ized de­ci­sion-mak­ing rules?

Con­sid­er­ing that in the re­cent fi­nan­cial in­dus­try crisis, the credit unions vir­tu­ally never needed a bailout, while most of the large banks did, there is good sup­port for the hy­poth­e­sis of CU = non-idiot, larger banks/​mort­gage bro­kers = idiot.

(Of course, I do differ from the gen­eral sub­prime pop­u­la­tion in that if I see that I can only get bad terms on a mort­gage, I don’t ac­cept them.)

No, they do re­quire that in­for­ma­tion to get the sub­prime loan; it’s just that they clas­sified me as sub­prime based purely on the lack of credit his­tory, ir­re­spec­tive of that non-loan his­tory. Pro­vid­ing that in­for­ma­tion, though re­quired, doesn’t get you back into prime ter­ri­tory.

This merely means that their for­mal crite­ria for sort­ing out loan ap­pli­cants into offi­cially rec­og­nized cat­e­gories dis­al­low the use of this in­for­ma­tion—which would be fully con­sis­tent with my propo­si­tions from the above com­ments.

Mort­gage lend­ing, es­pe­cially sub­prime lend­ing, has been a highly poli­ti­cized is­sue in the U.S. for many years, and this busi­ness pre­sents an es­pe­cially dense and dan­ger­ous le­gal minefield. Mul­ti­far­i­ous poli­ti­ci­ans, bu­reau­crats, courts, and promi­nent ac­tivists have a stake in that game, and they have all been us­ing what­ever means are at their dis­posal to in­fluence the ma­jor lenders, whether by car­rots or by sticks. All this has un­doubt­edly in­fluenced the rules un­der which loans are handed out in prac­tice, mak­ing the bu­reau­cratic rules and pro­ce­dures of large lenders seem even more non­sen­si­cal from the com­mon per­son’s per­spec­tive than they would oth­er­wise be.

(I won’t get into too many speci­fics in or­der to avoid rais­ing con­tro­ver­sial poli­ti­cal top­ics, but I think my point should be clear at least in the ab­stract, even if we dis­agree about the con­crete de­tails.)

Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot.

Why do you as­sume that the bailouts are in­dica­tive of idiocy? You seem to be as­sum­ing that—roughly speak­ing—the ma­jor fi­nanciers have been en­gaged in more or less reg­u­lar mar­ket-econ­omy busi­ness and done a bad job due to stu­pidity and in­com­pe­tence. That, how­ever, is a highly in­ac­cu­rate model of how the mod­ern fi­nan­cial in­dus­try op­er­ates and its re­la­tion­ship with var­i­ous branches of the gov­ern­ment—in­ac­cu­rate to the point of use­less­ness.

I ac­tu­ally agree with most of those points, and I’ve made many such crit­i­cisms my­self. So per­haps larger banks are forced into a po­si­tion where they rely too much on credit scores at one stage. Still, credit unions won, de­spite hav­ing much less poli­ti­cal pull, while sig­nifi­cantly larger banks top­pled. Much as I dis­agree with the poli­cies you’ve de­scribed, some of the banks’ er­rors (like as­sump­tions about re­pay­ment rates) were bad, no mat­ter what gov­ern­ment policy is.

If lend­ing had re­ally been reg­u­lated to the point of (ex­pected) un­prof­ita­bil­ity, they could have got­ten out of the busi­ness en­tirely, per­haps spin­ning off mort­gage di­vi­sions as credit unions to take ad­van­tage of those laws. In­stead, they used their poli­ti­cal power to “dance with the devil”, never ad­just­ing for the re­sult­ing risks, ei­ther poli­ti­cal or in real es­tate. There’s stu­pidity in that some­where.

In some cases this was an ex­am­ple of the prin­ci­pal–agent prob­lem—the in­ter­ests of bank em­ploy­ees were not nec­es­sar­ily al­igned with the in­ter­ests of the share­hold­ers. Bank ex­ec­u­tives can ‘win’ even when their bank top­ples.

The ques­tion of whether an agent’s in­ter­ests are al­igned with the prin­ci­pal’s is largely or­thog­o­nal to the ques­tion of whether the agent achieves a pos­i­tive re­turn. The agent’s ex­pected re­turn is more rele­vant.

There were many agents in­volved in the re­cent fi­nan­cial un­pleas­ant­ness whose harm was en­abled by the prin­ci­pal-agent prob­lem. My in­tended ex­am­ples did not suffer that prob­lem. I could have made that clearer.

This might be what they say in their books, where they give a detailed financial plan, though I doubt even that. What they advise is usually directed at the average mouthbreather who gets deep into credit card debt. They don’t need to advise such people to build a credit history by getting a credit card solely for that purpose—that ship has already sailed!

All I ever hear from them is “Stay away from credit cards en­tirely! Those are a trap!” I had never once heard a caveat about, “oh, but make sure to get one any­way so you don’t find your­self at 24 with­out a credit his­tory, just pay the bal­ance.” No, for most of what they say to make sense, you have to start from the as­sump­tion that the listener typ­i­cally doesn’t pay the full bal­ance, and is some­how en­light­ened by mov­ing to such a policy.

Notice how the citation you give is from a chapter-length treatment by a less-known finance guru (than Ramsey, Orman, Howard, etc.), and it’s about “optimizing credit cards”, a kind of complex, niche strategy. Not standard, general advice from a household name.

All I ever hear from them is “Stay away from credit cards en­tirely! Those are a trap!”

That would be an in­sanely stupid thing for any­one to say. Credit cards are very use­ful if used prop­erly. I agree with mat­tnew­port that the stan­dard ad­vice given in fi­nan­cial books is to charge a small amount ev­ery month to build up a credit rat­ing. Also, charge large pur­chases at the best in­ter­est rate you can find when you’ll use the pur­chases over time and you have a bud­get that will al­low you to pay them off.

Well, then I don’t know what to tell you. I’d listened to fi­nan­cial ad­vice shows on and off and had read Clark Howard’s book be­fore ap­ply­ing for the mort­gage back then, and never once did I hear or read that you should get a credit card merely to es­tab­lish a credit his­tory (and this is not why they is­sue them). I sus­pect it’s be­cause their ad­vice be­gins from the as­sump­tion that you’re in credit card debt, and you need to get out of that first, “you bozo”.

And your comment about the usefulness of credit cards for borrowing is a bit ivory-tower. In actual experience, based on all the exposé reports and news stories I’ve seen, it’s pretty much impossible to do that kind of planning, since credit card companies reserve the right to make arbitrary changes to the terms—and use that right.

I re­mem­ber one case where a bank is­sued a card that had a “guaran­teed” 1.9% rate for ~6 months with a ~$5000 limit—but if you ac­tu­ally used any­thing ap­proach­ing that limit, they would in­voke the credit risk clauses of the agree­ment, deem you a high risk be­cause of all the debt you’re car­ry­ing, and jack up your rate to over 20%. So, a 1.9% loan that they can im­me­di­ately change to 20% if they feel like it—in what sense was it a 1.9% loan?

For that rea­son, I don’t even con­sider us­ing a credit card for in­stal­l­ment pur­chases.

Wow, they can jack up the rate like that? I would definitely con­sider that fraud and abuse. That’s not com­mon, how­ever, and Congress re­cently passed leg­is­la­tion to pre­vent that sort of abuse. Cur­rently, I don’t have the op­tion of not us­ing a credit card; I would starve to death with­out it.

Wow, they can jack up the rate like that? I would definitely con­sider that fraud and abuse. That’s not com­mon …

I thought so too, but then was overwhelmed with stories like that. Most credit card agreements are written with a clause that says, “we can do whatever we want, and the most you can do to reject the new terms is pay off the entire debt in 15 days”. This is one of the few instances where courts will honor a contract that gives one party such open-ended power over the other.

If you haven’t been burned this way, it’s just a mat­ter of time.

And if you google the topic, I’m sure you’ll find enough to satisfy your ev­i­dence thresh­old.

Cur­rently, I don’t have the op­tion of not us­ing a credit card; I would starve to death with­out it.

Would you starve to death with it? If you can ser­vice the debts, let me loan you the money; at this point, most in­vestors would sell out their mother to get a frac­tion of the in­ter­est rate on their sav­ings that most credit cards charge. (Not that I would, but I’d turn down the offer with­out my trade­mark rude­ness...)

For what that’s worth, when I quit smok­ing, I didn’t feel any with­drawal symp­toms ex­cept be­ing a bit ner­vous and ir­ri­ta­ble for a sin­gle day (and I’m not even sure if quit­ting was the cause, since it co­in­cided with some stress­ful is­sues at work that could well have caused it re­gard­less). That was af­ter a few years of smok­ing some­thing like two packs a week on av­er­age (and much more than that dur­ing holi­days and other pe­ri­ods when I went out a lot).

From my ex­pe­rience, as well as what I ob­served from sev­eral peo­ple I know very well, most of what is nowa­days widely be­lieved about ad­dic­tion is a myth.

Yes, I would think it would take around 5-10 cigarettes a day (or more) for at least a week to de­velop an ad­dic­tion. While cigarettes (and heroin, and caf­feine) are very phys­i­cally ad­dic­tive, it still takes sus­tained, mod­er­ately high use to de­velop a phys­i­cal ad­dic­tion. Most cigarette smok­ers de­scribe their ad­dic­tions in terms of “x packs per day”.

From what I can see be­fore the pay­wall, it looks like I definitely didn’t meet the thresh­old un­der the best sci­ence, but I could prob­a­bly cross it from 5 cigarettes per day. I’d only try that out if I were re­warded for do­ing it (but not for stop­ping as that would defeat the pur­pose of such an ex­pe­rience).

I read the ar­ti­cle on pa­per be­fore it was hid­den in a pay­wall, so I can sum­ma­rize some of the find­ings:

1) Rat brains are ir­re­vo­ca­bly changed by a sin­gle dose of nico­tine.

2) Brains of rats that have never been ex­posed to nico­tine (“non-smok­ers”), those that are cur­rently given nico­tine on a reg­u­lar ba­sis (“cur­rent smok­ers”), and those that used to be given nico­tine on a reg­u­lar ba­sis but have been de­prived of it for a long time (“former smok­ers”) are all dis­t­in­guish­able from each other.

3) The au­thor notes that the pri­mary effect of nico­tine on ad­dicted hu­man smok­ers ap­pears to be sup­press­ing crav­ing for it­self.

4) The au­thor hy­poth­e­sizes that the brain has a crav­ing-gen­er­at­ing sys­tem and a sep­a­rate crav­ing-sup­pres­sion sys­tem. (Th­ese sys­tems ap­ply to ap­petites in gen­eral, such as the de­sire to eat food.) He fur­ther goes on to spec­u­late that the pri­mary ac­tion of nico­tine is to sup­press crav­ing. This has the effect of throw­ing the two sys­tems out of equil­ibrium, so the brain’s crav­ing-gen­er­a­tion sys­tem “works harder” to counter the effects of nico­tine. When the effects of nico­tine wear off (which can take much longer than the time it takes for the nico­tine to leave the body), the equil­ibrium is once again thrown out of bal­ance, re­sult­ing in crav­ings. (The effects of smok­ing on weight are men­tioned as sup­port for this hy­poth­e­sis.)
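The hypothesis in (4) is essentially a homeostasis/opponent-process argument, and its equilibrium logic can be shown with a toy simulation. The dynamics and constants below are entirely made up, purely to illustrate how adaptation to a suppressant produces above-baseline craving on withdrawal:

```python
# Toy opponent-process model of hypothesis (4): nicotine suppresses craving,
# the craving-generation system adapts upward to compensate, and withdrawal
# then leaves net craving above the never-exposed baseline.
# Dynamics are invented for illustration only.

def simulate(steps, nicotine_level):
    generation = 1.0   # baseline strength of the craving-generation system
    net_cravings = []
    for _ in range(steps):
        suppression = nicotine_level        # nicotine suppresses craving
        net = generation - suppression      # craving actually felt
        # Homeostasis: generation slowly adapts to bring net craving back to 1.0.
        generation += 0.1 * (1.0 - net)
        net_cravings.append(net)
    return generation, net_cravings

# While regularly "smoking", the generation system ramps up to compensate...
gen_after, _ = simulate(steps=100, nicotine_level=0.8)

# ...so when nicotine is removed, net craving overshoots the baseline.
print(gen_after > 1.0)  # True: withdrawal craving exceeds the never-smoked level
```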

For what it’s worth, the credit score sys­tem makes a lot more sense when you re­al­ize it’s not about eval­u­at­ing “this per­son’s abil­ity to re­pay debt”, but rather “ex­pected profit for lend­ing this per­son money at in­ter­est”.

Ex­pected profit ex­plains much be­hav­ior of credit card com­pa­nies, but I don’t think it helps at all with the be­hav­ior of the credit score sys­tem or mort­gage lenders (Silas’s ex­am­ple!). Nancy’s an­swer looks much bet­ter to me (ex­cept her use of the word “also”).

Also, more speci­fi­cally but less gen­er­ally rele­vant to LW; as a per­son be­ing pres­sured to make use of psy­cholog­i­cal ser­vices, are there any ra­tio­nal­ist psy­chol­o­gists in the Den­ver, CO area?

Do they really have such a poor track record? I know some scientists have very little respect for the “soft” sciences, but sociologists can at least make generalizations from studies done on large scales. Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective?

Yes this is es­sen­tially a post stat­ing my in­cre­dulity. Would you mind quel­ling it?

Scien­tific medicine is difficult and ex­pen­sive. I worry that the ap­par­ent suc­cess of CBT may be be­cause method­olog­i­cal com­pro­mises needed to make the re­search prac­ti­cal hap­pen to flat­ter CBT more than they flat­ter other ap­proaches.

I might be worrying about the wrong thing. Do we know anything about the usefulness of Prozac in treating depression? Since we turn a blind eye to the unblinding of all our studies by the sexual side-effects of Prozac, and also refuse to consider the direct impact of those side-effects, it could be argued that we don’t actually have any scientific knowledge of the effectiveness of the drug.

The claim I’ve seen as­so­ci­ated with Robyn Dawes is that ther­apy is use­ful (which I read as “more use­ful than be­ing on a wait­ing list”), but that un­trained ther­a­pists are just as good as those trained un­der most meth­ods. (ETA: and, con­trary to Kevin, they have been tested and found want­ing)

It’s not that other forms of psy­chother­apy are sci­en­tifi­cally shown to be 0% effec­tive; it’s just that ev­i­dence-based psy­chother­apy is a sur­pris­ingly re­cent field. Psy­chother­apy can still work even if some fields of it have not had rigor­ous stud­ies show­ing their effec­tive­ness… but you might as well go with a ther­a­pist that has train­ing in a field of psy­chother­apy that has some sci­en­tific method be­hind it.

I can’t help you with the Den­ver area in par­tic­u­lar, but the gen­eral an­swer is a definite yes. In an in­ter­est­ing jux­ta­po­si­tion, Amer­i­can Psy­chol­o­gist mag­a­z­ine had a re­cent is­sue promi­nently fea­tur­ing dis­cus­sion of how to get past the mi­suse of statis­tics dis­cussed in this very LW open thread. And it’s not the first time the mag­a­z­ine ad­dressed the point.

I think Learning Methods is a more sophisticated rationalist approach than CBT.

In­ter­est­ing. I found the site to be not very helpful, un­til I hit this page, which strongly sug­gests that at least one thing peo­ple are learn­ing from this train­ing is the prac­ti­cal ap­pli­ca­tion of the Mind Pro­jec­tion Fal­lacy:

Was the movie good or bad? If you an­swer BOTH, think it through. In a fac­tual sense, can the same movie be good AND bad? If it’s good, how can it be bad? The only way to make sense of a movie be­ing both good and bad is to re­al­ize that the good­ness and bad­ness does not ex­ist IN the movie, but IN Jack and IN Jill as a re­flec­tion of how the movie matches their in­di­vi­d­ual crite­ria.

The quote is from an article written by an LM student, describing some insights from the learning process that helped her overcome her stage fright.

IOW, at least one aspect of LM sounds a bit like a “rationality dojo” to me (in the sense that here’s an ordinary person with no special interest in rationalism, giving a beautiful (and more detailed than I quoted here) explanation of the Mind Projection Fallacy, based on her practical applications of it in everyday life).

(Bias dis­claimer: I might be pos­i­tively in­clined to what I’m read­ing be­cause some of it re­sem­bles or is read­ily trans­lat­able to as­pects of my own mod­els. Another ar­ti­cle that I’m in the mid­dle of read­ing, for ex­am­ple, talks about the im­por­tance of ad­dress­ing the ori­gins of non­con­sciously-trig­gered men­tal and phys­i­cal re­ac­tions, vs. con­sciously over­rid­ing symp­toms—an­other ap­proach I per­son­ally fa­vor.)

Delight­ful, and has a nice break­down of the sort of ques­tions to ask your­self (what ex­actly is the prob­lem, how much pre­ci­sion is ac­tu­ally needed, what is the con­di­tion of the tools, etc.) if you want to get things done effi­ciently.

A ~20 minute (absolutely worth every minute) interview with Dr. Robert Sapolsky, a leading researcher in the study of Toxoplasma and its effects on humans. This is a must see. Also, towards the end there is discussion of the effect of stress on telomere shortening. Fascinating stuff.

If your de­sires are in­fluenced by par­a­sites, then the par­a­sites are part of what makes you you. You may as well ask “If peo­ple’s de­sires are in­fluenced by their past ex­pe­rience, what does that do to CEV?” or “If peo­ple’s de­sires are in­fluenced by their brain chem­istry, what does that do to CEV?”

So what if Dr. Evil re­leases a par­a­site that rewires hu­man­ity’s brains in a pre­de­ter­mined man­ner? Should CEV take that into ac­count or should it aim to be­come Co­her­ent Ex­trap­o­lated Dis­in­fected Vo­li­tion?

Yep, I made a refer­ence to cul­tural in­fluence here. That’s why I sus­pect CEV should be ap­plied uniformly to the iden­tity-space of all pos­si­ble hu­mans rather than the sub­set of hu­mans that hap­pen to ex­ist when it gets ap­plied. In that case defin­ing hu­man­ity be­comes very, very im­por­tant.

Of course, per­haps the cur­rent for­mu­la­tion of CEV cov­ers the en­tire iden­tity-space equally and treats the liv­ing pop­u­la­tion as a sam­ple, and I have mi­s­un­der­stood. But if that is the case, Wei Dai’s last ar­ti­cle is also bunk, and I trust him to have bet­ter un­der­stand­ing of all things FAI than my­self.

Heh—my first in­stinct is to bite the bul­let and ap­ply CEV to ex­ist­ing hu­mans only. I couldn’t give a strong ar­gu­ment for that, though; I just can’t im­me­di­ately think of a rea­son to ex­clude non-cul­turally in­fluenced hu­mans while in­clud­ing cul­turally in­fluenced hu­mans.

I’ll give it a try. A hu­man’s mind and prefer­ences might be in­fluenced by cul­tural things like books and TV, and they might be in­fluenced by non-cul­tural things like par­a­sites. (And of course a lot of peo­ple will be in­fluenced by both.) I can’t think of a rea­son to in­clude the former in CEV and ex­clude the lat­ter that feels non-ar­bi­trary to me, so I don’t feel as if par­a­siti­cally mod­ified brains war­rant differ­ent treat­ment, such as al­ter­ing CEV to cover the space of all pos­si­ble hu­mans. My gut eval­u­ates the prospect of par­a­site-driven brains as just an­other kind of hu­man brain. (I’m pre­sum­ing as well that CEV as cur­rently for­mu­lated is just meant to cover ex­ist­ing hu­mans, not all pos­si­ble hu­mans.) That makes me con­tent to ap­ply CEV to ex­ist­ing hu­mans only—I don’t feel I have to try to ac­count for brain changes due to cul­ture or par­a­sites or what have you by ex­pand­ing it to in­cor­po­rate all of brain space.

(Wherein I seek ad­vice on what may be a fairly im­por­tant de­ci­sion.)

Within the next week, I’ll most likely be offered a sum­mer job where the pri­mary pro­ject will be port­ing a space weather mod­el­ing group’s simu­la­tion code to the GPU plat­form. (This would en­able them to start do­ing pre­dic­tive mod­el­ing of so­lar storms, which are in­creas­ingly hav­ing a big eco­nomic im­pact via dis­rup­tions to power grids and com­mu­ni­ca­tions sys­tems.) If I don’t take the job, the group’s efforts to take ad­van­tage of GPU com­put­ing will likely be de­layed by an­other year or two. This would be a valuable ed­u­ca­tional op­por­tu­nity for me in terms of learn­ing about sci­en­tific com­put­ing and gain­ing gen­eral pro­gram­ming/​de­sign skill; as I hope to start con­tribut­ing to FAI re­search within 5-10 years, this has po­ten­tially big in­stru­men­tal value.

Moore’s Law does make it eas­ier to de­velop AI with­out un­der­stand­ing what you’re do­ing, but that’s not a good thing. Moore’s Law grad­u­ally low­ers the difficulty of build­ing AI, but it doesn’t make Friendly AI any eas­ier. Friendly AI has noth­ing to do with hard­ware; it is a ques­tion of un­der­stand­ing. Once you have just enough com­put­ing power that some­one can build AI if they know ex­actly what they’re do­ing, Moore’s Law is no longer your friend. Moore’s Law is slowly weak­en­ing the shield that pre­vents us from mess­ing around with AI be­fore we re­ally un­der­stand in­tel­li­gence. Even­tu­ally that bar­rier will go down, and if we haven’t mas­tered the art of Friendly AI by that time, we’re in very se­ri­ous trou­ble. Moore’s Law is the count­down and it is tick­ing away. Moore’s Law is the en­emy.

Due to the qual­ity of the mod­els used by the afore­men­tioned re­search group and the pre­vailing level of in­ter­est in more ac­cu­rate mod­els of so­lar weather, suc­cess­ful com­ple­tion of this sum­mer pro­ject will prob­a­bly re­sult in a non­triv­ial in­crease in de­mand for GPUs. It seems that the next best use of my time this sum­mer would be to work full time on the ex­pres­sion-sim­plifi­ca­tion abil­ities of a com­puter alge­bra sys­tem.

Given all this in­for­ma­tion and the goal of re­duc­ing ex­is­ten­tial risk from unFriendly AI, should I take the job with the space weather re­search group, or not? (To avoid an­chor­ing on other peo­ple’s opinions, I’m hop­ing to get in­put from at least a cou­ple of LW read­ers be­fore men­tion­ing the ten­ta­tive con­clu­sion I’ve reached.)

ETA: I fi­nally got an e-mail re­sponse from the re­search group’s point of con­tact and she said all their stu­dent slots have been taken up for this sum­mer, so that ba­si­cally takes care of the de­ci­sion prob­lem. But I might be faced with a similar choice next sum­mer, so I’d still like to hear thoughts on this.

Un­in­formed opinion: space weather mod­el­ling doesn’t seem like a huge mar­ket, es­pe­cially when you com­pare it to the truly mas­sive gam­ing mar­ket. I doubt the in­crease in de­mand would be sig­nifi­cant, and if what you’re wor­ried about is rate of growth, it seems like de­lay­ing it a cou­ple of years would be wholly in­signifi­cant.

The amount you could slow down Moore’s Law by any strat­egy is minus­cule com­pared to the amount you can con­tribute to FAI progress if you choose. It’s like feel­ing guilty over not re­cy­cling a pa­per cup, when you’re plan­ning to be­come a lob­by­ist for an en­vi­ron­men­tal­ist group later.

I’m pretty sure that Roko means the sec­ond. If this idea got men­tioned to Eliezer I’m pretty sure he’d point out the min­i­mal im­pact that any sin­gle hu­man can have on this, even be­fore one gets to whether or not it is a good idea.

I would say that there seem to be a lot of com­pa­nies that are in one way or an­other try­ing to ad­vance Moore’s law. For as long as it doesn’t seem like the one you’re work­ing on has a truly rev­olu­tion­ary ad­van­tage as com­pared to the other com­pa­nies, just tak­ing the money but donat­ing a large por­tion of it to ex­is­ten­tial risk re­duc­tion is prob­a­bly an okay move.

If you get an op­por­tu­nity like that, take it. It’s one thing to gain emo­tional com­fort from be­liev­ing fan­tasies about com­put­ers with mag­i­cal pow­ers, but when fan­tasy is be­ing used as a rea­son to close off real life op­por­tu­nity, some­thing is badly wrong.

The blog of Scott Adams (au­thor of Dilbert) is gen­er­ally quite awe­some from a ra­tio­nal­ist per­spec­tive, but one re­cent post re­ally stood out for me: Hap­piness But­ton.

Sup­pose hu­mans were born with mag­i­cal but­tons on their fore­heads. When some­one else pushes your but­ton, it makes you very happy. But like tick­ling, it only works when some­one else presses it. Imag­ine it’s easy to use. You just reach over, press it once, and the other per­son be­comes wildly happy for a few min­utes.

Karma does make me feel im­por­tant, but when it comes to hap­piness karma can’t hold a can­dle to loud mu­sic, al­co­hol and girls (prefer­ably in com­bi­na­tion). I wish more peo­ple rec­og­nized these for the eter­nal uni­ver­sal val­ues they are. If only some­one in­vented a but­ton to send me some loud mu­sic, al­co­hol and girls, that would be the ul­ti­mate startup ever.

A so­cial cus­tom would be es­tab­lished that but­tons are only to be pressed by knock­ing fore­heads to­gether. Offer­ing to press a but­ton in a fash­ion that doesn’t en­sure mu­tu­al­ity is seen as a pa­thetic dis­play of low sta­tus.

Push­ing some­one’s hap­piness but­ton is like do­ing them a fa­vor, or giv­ing them a gift. Do we have so­cial cus­toms that de­mand fa­vors and gifts always be ex­changed si­mul­ta­neously? Well, there are some cus­toms like that, but in gen­eral no, be­cause we have mem­ory and can keep men­tal score.

Hah. Sta­tus is rel­a­tive, re­mem­ber? Your setup just en­sures that “dodg­ing” at the last mo­ment, get­ting your but­ton pressed with­out press­ing theirs, is seen as a glo­ri­ous dis­play of high sta­tus.

Clas­si­cal game the­o­rists es­tab­lish a sci­en­tific con­sen­sus that the only ra­tio­nal course of ac­tion is not to push the but­tons. Any­one who does is re­garded with con­tempt or pity and gets low­ered in the so­cial stra­tum, be­fore fi­nally man­ag­ing to ra­tio­nal­ize the idea out of con­scious at­ten­tion, with the help of the in­stinct to con­for­mity. A few free-rid­ers smugly teach the re­main­ing naive push­ers a bit­ter les­son, only to stop re­ceiv­ing the benefit. Every­one gets back to busi­ness as usual, crazy peo­ple spin­ning the wheels of a mad world.

I’d be far more will­ing to be­lieve in game the­o­rists call­ing for defec­tion on the iter­ated PD than in math­e­mat­i­ci­ans steer­ing main­stream cul­ture.

How­ever, with the pos­i­tive-sum na­ture of this game, I’d ex­pect the­o­rists to go with Schel­ling in­stead of Nash; and then be com­pletely dis­re­garded by the gen­eral pub­lic who cat­e­go­rize it un­der “phys­i­cal ways of caus­ing plea­sure” and put sex­ual taboos on it.

Here's what the theory actually says: if you know the number of iterations exactly, it's a Nash equilibrium for both to defect on all iterations. But if you only know the chance that this iteration will be the last, and this chance isn't too high (e.g. below 1/3; I can't be bothered to give an exact value right now), it's a Nash equilibrium for both to cooperate as long as the opponent has cooperated on previous iterations.
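Under a grim-trigger strategy, the cutoff comes from comparing the value of cooperating forever against a one-shot defection followed by mutual punishment. A minimal sketch, assuming the standard textbook payoffs T=5, R=3, P=1 (the comment above doesn't commit to specific numbers, so the resulting cutoff is illustrative, not "the" exact value):

```python
def min_continuation_prob(T, R, P):
    """Minimum per-round continuation probability (delta) for mutual
    grim-trigger cooperation to be a Nash equilibrium in the repeated
    prisoner's dilemma: cooperating forever is worth R/(1-delta), a
    one-shot defection is worth T + delta*P/(1-delta), so cooperation
    holds iff delta >= (T - R) / (T - P)."""
    return (T - R) / (T - P)

# Standard textbook payoffs: temptation 5, mutual cooperation 3, mutual defection 1.
delta_star = min_continuation_prob(5, 3, 1)
print(delta_star)      # 0.5: cooperation survives if the game is likely to continue
print(1 - delta_star)  # 0.5: i.e. for these payoffs, a stopping chance up to 1/2 works
```

For these particular payoffs the "below 1/3" figure is conservative; with other payoff matrices the threshold moves.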

I ac­tu­ally do think peo­ple in such a world ought not to press but­tons. But not very strongly… only about the same “ought­not­ness” as peo­ple ought not to waste time look­ing at porn.

The ar­gu­ment is the same: Aren’t there bet­ter things we could be do­ing?

Ideally, in but­ton-world, peo­ple will de­vise a way to re­move their but­tons.

But if that couldn’t be done, and we’re se­ri­ously ask­ing “what would hap­pen?” I sup­pose it might end up be­ing treated like sex. Hav­ing one’s but­ton pub­li­cly visi­ble is “in­de­cent”—but­tons are only pushed in pri­vate. Etc. etc.

I dunno, this strikes me as a somewhat sex-negative attitude. Responding seriously to your question about the better things we could be doing, it strikes me that people spend most of their time doing worthless things. We seldom really know whether we are happy, what it means to be happy, or how what we are doing might connect to somebody's future happiness.

If the but­tons ac­tu­ally made peo­ple happy from time to time, it could be quite use­ful as a ‘re­al­ity check.’ Peo­ple sus­pect­ing that X led to hap­piness could test and falsify their claim by see­ing whether X pro­duced the same men­tal/​emo­tional state that the but­ton did.

Ob­vi­ously we shouldn’t spend all our time press­ing but­tons, hav­ing sex, or look­ing at porn. But I some­times won­der whether we wouldn’t be bet­ter off if most peo­ple, es­pe­cially in the de­vel­oped world where la­bor seems to be over-sup­plied and the op­por­tu­nity cost of not work­ing is low, spent a cou­ple hours a day do­ing things like that.

If the but­tons ac­tu­ally made peo­ple happy from time to time, it could be quite use­ful as a ‘re­al­ity check.’ Peo­ple sus­pect­ing that X led to hap­piness could test and falsify their claim by see­ing whether X pro­duced the same men­tal/​emo­tional state that the but­ton did.

Isn’t that a bit like snort­ing some coke (or per­haps just mas­tur­bat­ing) af­ter a happy ex­pe­rience (say, prov­ing a par­tic­u­larly in­ter­est­ing the­o­rem) to test whether it was re­ally ‘happy’?

There are many differ­ent kinds of ‘hap­piness’, and what makes an ex­pe­rience a happy or an un­happy one is not at all sim­ple to pin down. A kind of hap­piness that one can ob­tain at will, as of­ten as de­sired, and which is un­re­lated to any “ob­jec­tive im­prove­ment” in one­self or the things one cares about, isn’t re­ally hap­piness at all.

Pretend it's New Year's Eve and you're planning some goals for next year: some things that, if you achieve them, you will look back on with pride and a sense of accomplishment. Is 'looking at lots of porn' on your list (even assuming that it's free and no one was harmed in producing it)?

I don’t mean to im­ply any­thing about sex, be­cause sex has a whole lot of things as­so­ci­ated with it that make it ex­tremely com­pli­cated. But the ‘plea­sure but­ton’ sce­nario gives us a clean slate to work from, and to me it seems an ob­vi­ous re­duc­tio ad ab­sur­dum of the idea that plea­sure = util­ity.

A kind of hap­piness that one can ob­tain at will, as of­ten as de­sired, and which is un­re­lated to any “ob­jec­tive im­prove­ment” in one­self or the things one cares about, isn’t re­ally hap­piness at all.

Sure it is. It may not be ac­com­plish­ment, or mean­ingful­ness, but it is hap­piness, by defi­ni­tion. I think the con­fu­sion comes be­cause you seem to value many other things more than hap­piness, such as pride and ac­com­plish­ment. Hap­piness is just a feel­ing; it’s not defined as some­thing that you need to value most, or gain the most util­ity from.

How do you distinguish a degenerate case of 'happiness' from 'satiation of a need'? Is the smoker or heroin addict made 'happy' by their fix? Does a glass of water make you 'happy' if you're dying of thirst, or does it just satiate the thirst?

And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances? A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'.

The idea that there’s a “raw hap­piness feel­ing” de­tach­able from the in­for­ma­tion con­tent that goes with it is in­tu­itively ap­peal­ing but fatally flawed.

And can’t the same sen­sa­tion be ei­ther ‘happy’ or ‘un­happy’ de­pend­ing on the cir­cum­stances? A per­son with per­sis­tent sex­ual arousal syn­drome isn’t made ‘happy’ by the or­gasms they can’t help but ‘en­dure’.

Yes, this is true. We will need to as­sume that the but­ton can an­a­lyze the con­text to de­ter­mine how to provide hap­piness for the par­tic­u­lar brain it’s at­tached to.

My point is that hap­piness is not nec­es­sar­ily as­so­ci­ated with ac­com­plish­ment or ob­jec­tive im­prove­ment in one­self (though it can be). In such a situ­a­tion, some peo­ple might not value this kind of de­tached hap­piness, but that doesn’t mean it’s not hap­piness.

Depends on how you define happiness. If you define it as "how much dopamine is in my system", "joy", or "these are the neat brainwaves my brain is giving off", then yes, you could achieve happiness by pressing a button (in theory).

A lot of peo­ple seem to as­sume hap­piness = util­ity mea­sured in utilons, which is a whole differ­ent thing al­to­gether.

Sort of like see­ing some one writhe in ec­stasy af­ter jam­ming a nee­dle in their arm and say­ing, “I’m so happy I’m not a heroin ad­dict.”

Depends on how you define happiness. If you define it as "how much dopamine is in my system", "joy", or "these are the neat brainwaves my brain is giving off", then yes, you can achieve happiness by pressing a button.

Oh, re­ally? How can I get a cheap, le­gal, re­peat­able dopamine rush to my brain?

Edited my post to re­flect your point. Although, I’m a young male and can achieve or­gasm mul­ti­ple times in un­der ten min­utes with the aid of some lube and free porn. You prob­a­bly didn’t want to know that.

It seems the pharma in­dus­try dis­cov­ered the effect of PDE5 in­hibitors on erec­tile dys­func­tion pretty much by ac­ci­dent. The stuff was ini­tially de­vel­oped to treat heart dis­ease, ini­tial tests showed it didn’t work, but male test sub­jects re­ported a use­ful side effect. Re­minds me of the story of post-it notes: the guy who de­vel­oped them ac­tu­ally wanted to cre­ate the ul­ti­mate glue, but sadly the re­sult of his best efforts didn’t stick very well, so he just went ahead and com­mer­cial­ized what he had.

If big pharma is listen­ing, I’d like to post a re­quest for ex­er­cise pills.

Actually, orgasms are usually much less intense and don't result in ejaculation if I achieve them in under a certain amount of time. I find the best are in the 20-30 minute range.

A lot of peo­ple seem to as­sume hap­piness = util­ity mea­sured in utilons, which is a whole differ­ent thing al­to­gether.

Yes, I've noticed that assumption, and I think even Jeremy Bentham talked about pleasure in utility terms. I don't think it's accurate for everyone: for instance, someone who values accomplishment more than happiness will assign higher utility to choices that lead to unhappy accomplishment than to unproductive leisure.

That’s a strange defi­ni­tion of “hap­pier”. They’re hap­pier with a choice just be­cause they pre­fer that choice? Even if they ap­pear frus­trated and tired and grumpy all the time? Even if they tell you they’re not happy and they pre­fer this un­hap­piness to not ac­com­plish­ing any­thing?

(In real life, I sus­pect happy peo­ple ac­tu­ally ac­com­plish more, but con­sider a hy­po­thet­i­cal where you have to choose be­tween un­happy ac­com­plish­ment and un­pro­duc­tive leisure.)

Eliezer did this whole thing in the Fun The­ory se­quence. Yes, not do­ing any­thing would be very bor­ing, and be­ing filled with cool drugs sounds like a hor­ror story to my cur­rent util­ity curve. Let’s hope the fu­ture isn’t some form of ironic hell.

AlephNeil, I was tak­ing Scott Adams’ as­ser­tion that the but­ton pro­duces “hap­piness” at face value. I was be­ing rather literal, I’m afraid. I think you’re right to worry that no ac­tual mechanism we can imag­ine in the near fu­ture would act like Scott’s but­ton.

I stand by my point, though, that if we re­ally did have a literal hap­piness but­ton, it would prob­a­bly be a good thing.

As per­haps a some­what more neu­tral ex­am­ple, I like to splash around in a swim­ming pool. It’s fun. I hope to do that a lot over the next year or so. If I suc­cess­fully play in the pool a lot dur­ing time that oth­er­wise might have been spent read­ing marginally in­ter­est­ing ar­ti­cles, star­ing into space, ha­rass­ing room­mates, or work­ing over­time on pro­jects I don’t care about, I will con­sider it a minor ac­com­plish­ment.

More to the point, if reg­u­lar bouts of aquatic play­time keep me well-ad­justed and ac­cu­rately tuned-in to what it means to be happy, then I will ra­tio­nally ex­pect to ac­com­plish all kinds of other things that make me and oth­ers happy. I will con­sider this to be a mod­er­ate ac­com­plish­ment.

There is a differ­ence be­tween plea­sure and util­ity, but I don’t think it’s ridicu­lous at all to have a plea­sure term in one’s util­ity func­tion. A more pleas­ant life, all else be­ing equal, is a bet­ter one. There may be diminish­ing re­turns in­volved, but, well, that’s why we shouldn’t liter­ally spend all day press­ing the but­ton.

I sup­pose it might end up be­ing treated like sex. Hav­ing one’s but­ton pub­li­cly visi­ble is “in­de­cent”—but­tons are only pushed in pri­vate.

The anal­ogy to sex is rough. From a his­tor­i­cal and evolu­tion­ary per­spec­tive, sex is treated the way it is be­cause it leads to gene repli­ca­tion and par­ent­hood, not be­cause it leads to plea­sure. The lack of side effects from the but­tons makes them more com­pa­rable to rub­bing some­one’s back, smil­ing, or say­ing some­thing nice to some­one.

OK—well that’s one pos­si­bil­ity. But in dis­cussing ei­ther of these analo­gies, aren’t we just show­ing (a) that the plea­sure-but­ton sce­nario is un­der­de­ter­mined, be­cause there are many differ­ent kinds of plea­sure and (b) that it’s re­dun­dant, be­cause peo­ple can ac­tu­ally give each other pats on the back, or hand-jobs or what­ever.

How does that work? I suppose it makes sense a little, considering that the world has to go on and can't stop because everyone's on the ground being "happy", but it wouldn't mean that people wouldn't do it, or even that it wouldn't be the "rational" thing to do.

Is ev­ery­one miss­ing the ob­vi­ous sub­text in the origi­nal ar­ti­cle—that we already live in just such a world but the but­ton is lo­cated not on the fore­head but in the crotch?

Per­haps some peo­ple would give their but­ton-push­ing ser­vices away for free, to any­one who asked. Let’s call those peo­ple gen­er­ous, or as they would be­come known in this hy­po­thet­i­cal world: crazy sluts.

Ex­cept that sex, un­like the but­ton in the story, doesn’t always make peo­ple happy. Some­times, for some peo­ple, it comes with com­pli­ca­tions that de­crease net util­ity. (Also, it is pos­si­ble to push your own but­ton with sex.)

Sure, but it’s not my com­par­i­son—I’m just say­ing it ap­pears to be the ob­vi­ous sub­text of the origi­nal ar­ti­cle.

But­ton push­ing would be­come an is­sue of power and poli­tics within re­la­tion­ships and within busi­ness. The rich and fa­mous would get their but­tons pushed all day long, while the lonely would fan­ta­size about how great that would be.

The rich and fa­mous would get their but­tons pushed all day long, while the lonely would fan­ta­size about how great that would be.

But two poor, "lonely" people could just get together and push each other's buttons. That's the problem with this: any two people who can cooperate with each other can get the advantage. There was once an experiment to evolve programs with a genetic algorithm that could play the prisoner's dilemma. I'm not sure exactly how it was organized, which would really make or break different strategies, but the result was a program which always cooperated except when the other didn't, and which continued refusing to cooperate until it believed they were "even".

William Sale­tan at Slate is writ­ing a se­ries of ar­ti­cles on the his­tory and uses of mem­ory falsifi­ca­tion, deal­ing mainly with Eliz­a­beth Lof­tus and the ethics of her work. Quote from the lat­est ar­ti­cle:

Lof­tus didn’t flinch at this step. “A ther­a­pist isn’t sup­posed to lie to clients,” she con­ceded. “But there’s noth­ing to stop a par­ent from try­ing some­thing like [mem­ory mod­ifi­ca­tion] with an over­weight child or teen.” Par­ents already lied to kids about Santa Claus and the tooth fairy, she ob­served. To her, it was a no-brainer: “A white lie that might get them to eat broc­coli and as­para­gus vs. a life­time of obe­sity and di­a­betes: Which would you rather have for your kid?”

In­ter­est­ing. I have read sev­eral of Lof­tus’s books, but the last one was The Myth of Re­pressed Me­mory: False Me­mories and Alle­ga­tions of Sex­ual Abuse over ten years ago. I think I’ll go see what she has writ­ten since. Thanks for re­mind­ing me of her work.

There is a small remark about insurance in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making, saying that all insurance has negative expected utility: we pay too high a price for too small a risk; otherwise insurance companies would go bankrupt.

No—In­surance has nega­tive ex­pected mon­e­tary re­turn, which is not the same as ex­pected util­ity. If your util­ity func­tion obeys the law of diminish­ing marginal util­ity, then it also obeys the law of in­creas­ing marginal di­su­til­ity. So, for ex­am­ple, los­ing 10x will be more than ten times as bad as los­ing x. (Just as gain­ing 10x is less than ten times as good as gain­ing x.)

There­fore, on your util­ity curve, a guaran­teed loss of x can be bet­ter than a 1/​1000 chance of los­ing 1000x.

ETA: If it helps, look at a logarithmic curve and treat it as your utility as a function of some quantity. Such a curve obeys diminishing marginal utility: at any given point, your utility increases less than proportionally going up, but decreases more than proportionally going down.

(Incidentally, I actually wrote an embarrassing article arguing in favor of the thesis Roland presents, and you can probably still find it on the internet. That exchange is also an example of someone being bad at explaining. If my opponent had simply stated the equivalence between DMU and IMD, I would have understood why that argument about insurance is wrong. Instead, he just resorted to lots of examples of when people buy insurance, which are totally unconvincing if you accept the quoted argument.)
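The DMU point can be made concrete with a logarithmic utility function. All numbers below (the starting wealth, the $120 premium, the 1-in-1000 loss) are made up for illustration; the premium costs more than the expected monetary loss of about $100, yet still wins in expected utility:

```python
import math

def expected_log_utility(wealth, outcomes):
    """Expected log utility over (probability, loss) pairs."""
    return sum(p * math.log(wealth - loss) for p, loss in outcomes)

W = 100_000    # hypothetical starting wealth
premium = 120  # guaranteed loss; above the ~$100 expected loss, so -EV in money
risk = [(0.999, 0), (0.001, W - 1)]  # 1-in-1000 chance of near-total ruin

insured = expected_log_utility(W, [(1.0, premium)])
uninsured = expected_log_utility(W, risk)
print(insured > uninsured)  # True: -EV in money, +EV in log utility
```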

I voted this up, but I want to com­ment to point out that this is a re­ally im­por­tant point. Don’t be tricked into not get­ting in­surance just be­cause it has a nega­tive ex­pected mon­e­tary value.

I voted Silas up as well be­cause it’s an im­por­tant point but it shouldn’t be taken as a gen­eral rea­son to buy as much in­surance as pos­si­ble (I doubt Silas in­tended it that way ei­ther). Jonathan_Graehl’s point that you should self-in­sure if you can af­ford to and only take in­surance for risks you can­not af­ford to self-in­sure is prob­a­bly the right bal­ance.

Per­son­ally I don’t di­rectly pay for any in­surance. I live in Canada (uni­ver­sal health cov­er­age) and have ex­tended health in­surance through work (much to my dis­may I can­not de­cline it in fa­vor of cash) which means I have far more health in­surance than I would pur­chase with my own money. Given my aver­sion to pa­per­work I don’t even fully use what I have. I do not own a house or a car which are the other two ar­eas ar­guably worth in­sur­ing. I don’t have de­pen­dents so have no need for life or dis­abil­ity cov­er­age. All other forms of in­surance fall into the ‘self-in­sure’ cat­e­gory for me given my rel­a­tively low risk aver­sion.

Risk is more ex­pen­sive when you have a smaller bankroll. Many slot ma­chines ac­tu­ally offer pos­i­tive ex­pected value pay­outs—they make their re­turn on peo­ple plow­ing their win­nings back in un­til they go broke.

Ci­ta­tion please? A cur­sory search sug­gests that ma­chines go through +EV phases, just like black­jack, but that in­di­vi­d­ual ma­chines are -EV. It’s not just that they ex­pect peo­ple to plow the money back in, but that pros have to wait for fish to plow money in to get to the +EV situ­a­tion.

The difference with blackjack is that you can (in theory) adjust your bet to take advantage of the different phases of blackjack. Your first sentence seems to match Roland's comment about the Kelly criterion (you lose betting against snake eyes if you bet your whole bankroll every time), but that doesn't make sense with fixed-bet slots. There, if it made sense to make the first bet, it makes sense to continue betting after a jackpot.

This comes up fre­quently in gam­bling and statis­tics cir­cles. “Ci­ta­tion please” is the cor­rect re­sponse—cas­inos do NOT ex­pect to make a profit by offer­ing los­ing (for them) bets and let­ting “gam­bler’s ruin” pay them off. It just doesn’t work that way.

The fact that a +moneyEV bet can be -util­i­tyEV for a gam­bler does NOT im­ply that a -moneyEV bet can be +util­i­tyEV for the cas­ino. It’s -util­ity for both par­ti­ci­pants.

The only rea­son cas­inos offer such bets ever is for pro­mo­tional rea­sons, and they hope to make the money back on differ­ent wa­gers the gam­bler will make while there.

The Kelly calcu­la­tions work just fine for all these bets—for cyclic bets, it ends up you should bet 0 when -EV. When +EV, bet some frac­tion of your bankroll that max­i­mizes mean-log-out­come for each wa­ger.
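The Kelly fraction mentioned above, for the simple case of a bet paying b-to-1 with win probability p, can be sketched as follows (this is the standard formula, not anything specific to slots or to this thread):

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to wager on a bet that pays
    b-to-1 with win probability p: f* = p - (1 - p) / b, floored at 0
    (a -EV bet gets a zero stake, matching 'bet 0 when -EV')."""
    return max(p - (1 - p) / b, 0.0)

print(round(kelly_fraction(0.5, 1.0), 3))  # 0.0: fair coin at even odds, don't bet
print(round(kelly_fraction(0.6, 1.0), 3))  # 0.2: +EV, stake 20% of bankroll
print(round(kelly_fraction(0.3, 1.0), 3))  # 0.0: -EV, bet nothing
```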

Be­cause slot ma­chines are de­signed to hook you in, you’re go­ing to get some re­turn on in­vest­ment from them if you hold your­self to a spe­cific amount. At the Cas­ino de Lac Leamy, up in Canada (run, I would add, by the Que­bec provin­cial gov­ern­ment. Now that’s a lot­tery sys­tem), the slots are ‘loose.’ They pay out rel­a­tively of­ten. In fact, when Weds and I have played twenty dol­lars worth of slots to­gether, we’ve never failed to leave the cas­ino floor with more money than we had en­ter­ing the floor. That twenty dol­lars has been any­thing from thirty to sixty-five dol­lars, the three or four times we’ve done this.

I’ll give you that “many” is al­most cer­tainly flat wrong, on re­flec­tion, but such ma­chines are (were?) prob­a­bly out there.

That movie was full of falsehoods. For example, people named Silas are actually no more or less likely than the general population to be tall homicidal albino monks—but you wouldn't guess that from seeing the movie, now, would you?

That twenty dol­lars has been any­thing from thirty to sixty-five dol­lars, the three or four times we’ve done this.

I'm pretty sure it's not that unlikely to come out ahead 'three or four' times when playing slot machines (if it weren't so late I'd actually do the sums). It seems much more plausible that the blog author was just lucky than that the machines were actually set to regularly pay out positive amounts.
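Doing the sums by simulation: the payout distribution below is entirely made up (a hypothetical "loose" machine that still loses 10% per spin on average), but it suggests that leaving ahead in all of four short sessions is unlikely rather than impossible, so luck is a live explanation:

```python
import random

random.seed(0)

def session_profit(bankroll=20, bet=1, p_win=0.3, payout=3):
    """Run $20 once through a hypothetical slot machine: each $1 spin
    wins $3 with probability 0.3, i.e. the player loses 10% per spin
    on average. Returns the net profit for the session."""
    profit = 0
    for _ in range(bankroll // bet):
        profit -= bet
        if random.random() < p_win:
            profit += payout
    return profit

# Chance of walking away ahead in ALL of four independent sessions.
trials = 20_000
ahead_every_time = sum(
    all(session_profit() > 0 for _ in range(4)) for _ in range(trials)
) / trials
print(ahead_every_time)  # a few percent for these made-up parameters
```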

That’s definitely a re­lated re­sult. (So re­lated, in fact, that think­ing about the +EV slots the other day got me won­der­ing what the op­ti­mal frac­tion of your wealth was to bid on an ar­bi­trary bet—which, of course, is just the Kelly crite­rion.)

I’d like to pose a re­lated ques­tion. Why is in­surance struc­tured as up-front pay­ments and un­limited cov­er­age, and not as con­di­tional loans?

For example, one could imagine car insurance as an options contract (or perhaps a futures contract) where, if your car is totaled, you get a loan sufficient for replacement. One then pays off the loan with interest.

The person buying this form of insurance makes fewer payments upfront, reducing their opportunity costs and also the risk of letting insurance lapse due to random fluctuations. The entity selling this form of insurance reduces the risk of moral hazard (i.e. someone taking out insurance, torching their car, and then letting the insurance lapse the next month).

Except by assuming strange consumer preferences or irrationality, I don't see any obvious reason why this form of insurance isn't superior to the usual kind.

Well, look at a more extreme example. Imagine an accident in which you not only total a car, but are also on the hook for a large bill in medical costs, and there's no way you can afford to pay this bill even if it's transmuted into a loan with very favorable terms. With ordinary insurance, you're off the hook even in this situation—except possibly for the increased future insurance costs now that the accident is on your record, which you'll still likely be able to afford.

The goal of in­surance is to trans­fer money from a large mass of peo­ple to a minor­ity that hap­pens to be struck by an im­prob­a­ble catas­trophic event (with the in­surer tak­ing a share as the trans­ac­tion-fa­cil­i­tat­ing mid­dle­man, of course). Thus a small pos­si­bil­ity of a catas­trophic cost is trans­muted into the cer­tainty of a bear­able cost. This wouldn’t be pos­si­ble if in­stead of get­ting you off the hook, the in­surer bur­dened you with an im­mense debt in case of dis­aster.

(A corol­lary of this ob­ser­va­tion is that the no­tion of “health in­surance” is one of the worst mis­nomers to ever en­ter pub­lic cir­cu­la­tion.)

Alright, so this might not work for med­i­cal dis­asters late in life, things that di­rectly af­fect fu­ture earn­ing power. (Some of those could be han­dled by sav­ings made pos­si­ble by not hav­ing to make in­surance pay­ments.)

But that’s just one small area of in­surance. You’ve got hous­ing, cars, un­em­ploy­ment, and this is just what comes to mind for con­sumers, never mind all the cor­po­rate or busi­ness need for in­surance. Are all of those en­tities buy­ing in­surance re­ally not in a po­si­tion to re­pay a loan af­ter a catas­tro­phe’s oc­cur­rence? Even nigh-im­mor­tal in­sti­tu­tions?

I wouldn’t say that the sce­nar­ios I de­scribed are “just one small area of in­surance.” Most things for which peo­ple buy in­surance fit un­der that pat­tern—for a small to mod­er­ate price, you buy the right to claim a large sum that saves you, or at least alle­vi­ates your po­si­tion, if an im­prob­a­ble ru­inous event oc­curs. (Or, in the spe­cific case of life in­surance, that sum is sup­posed to alle­vi­ate the po­si­tion of oth­ers you care about who would suffer if you die un­ex­pect­edly.)

However, it should also be noted that the role of insurance companies is not limited to risk pooling. Since in case of disaster the burden falls on them, they also specialize in specific forms of damage control (e.g. by aggressive lawyering, and generally by having non-trivial knowledge of how to make the best out of specific bad situations). Therefore, the expected benefit from insurance might actually be higher than the cost, regardless of risk aversion. Of course, insurers could play the same role within your proposed emergency-loan scheme.

It could also be that cer­tain forms of in­surance are man­dated by reg­u­la­tions even when it comes to in­sti­tu­tions large enough that they’d be bet­ter off pool­ing their own risk, or that you’re not al­lowed to do cer­tain types of trans­ac­tions ex­cept un­der the offi­cial guise of “in­surance.” I’d be sur­prised if the mod­ern in­finitely com­plex mazes of busi­ness reg­u­la­tion don’t give rise to at least some such situ­a­tions.

More­over, there is also the con­fu­sion caused by the fact that gov­ern­ments like to give the name of “in­surance” to var­i­ous pro­grams that have lit­tle or noth­ing to do with ac­tu­ar­ial risk, and in fact rep­re­sent more or less pure trans­fer schemes. (I’m not try­ing to open a dis­cus­sion about the mer­its of such schemes; I’m merely not­ing that they, as a mat­ter of fact, aren’t based on risk pool­ing that is the ba­sis of in­surance in the true sense of the term.)

I wouldn’t say that the sce­nar­ios I de­scribed are “just one small area of in­surance.” Most things for which peo­ple buy in­surance fit un­der that pat­tern—for a small to mod­er­ate price, you buy the right to claim a large sum that saves you, or at least alle­vi­ates your po­si­tion, if an im­prob­a­ble ru­inous event oc­curs.

In­trin­si­cally, the av­er­age per­son must pay in more than they get out. Other­wise the in­surance com­pany would go bankrupt.

Since in case of disaster the burden falls on them, they also specialize in specific forms of damage control (e.g. by aggressive lawyering, and generally by having non-trivial knowledge of how to make the best out of specific bad situations).

No rea­son a loan style in­surance com­pany couldn’t do the ex­act same thing.

I’d be sur­prised if the mod­ern in­finitely com­plex mazes of busi­ness reg­u­la­tion don’t give rise to at least some such situ­a­tions.

‘Rent-seek­ing’ and ‘reg­u­la­tory cap­ture’ are cer­tainly good an­swers to the ques­tion why doesn’t this ex­ist.

For one thing, in­surance makes ex­penses more pre­dictable; though the de­sire for pre­dictabil­ity (in or­der to bud­get, or the like) does prob­a­bly in­di­cate ir­ra­tional­ity and/​or bounded ra­tio­nal­ity.

What’s un­pre­dictable about a loan? You can pre­dict what you’ll be pay­ing pretty darn pre­cisely, and there’s no in­trin­sic rea­son that your monthly loan re­pay­ments would have to be higher than your in­surance pre-pay­ments.

Ob­vi­ously if you know your util­ity func­tion and the true dis­tri­bu­tion of pos­si­ble risks, it’s easy to de­cide whether to take a par­tic­u­lar in­surance deal.

The stan­dard ad­vice is that if you can af­ford to self-in­sure, you should, for the rea­son you cite (that in­surance com­pa­nies make a profit, on av­er­age).

That’s a heuris­tic that holds up fine ex­cept when you know (for rea­sons you will keep se­cret from in­sur­ers) your own risk is higher than they could ex­pect; then, de­pend­ing on how com­pet­i­tive in­sur­ers are, even if you’re not too risk-averse, you might find a good deal, even to the ex­tent that you turn an ex­pected (dis­counted) profit, and so should buy it even if you have zero risk aver­sion. Ap­par­ently in Cal­ifor­nia, auto in­sur­ers are re­quired to pub­lish the al­gorithm by which they as­sign pre­miums (and are pos­si­bly pro­hibited from us­ing cer­tain types of in­for­ma­tion).

Con­versely, you may choose to have no in­surance (or ex­tremely high de­ductible) in cases where you be­lieve your per­sonal risk is far be­low what the in­surer ap­pears to be­lieve, even when you’re ac­tu­ally averse to that risk.

Of course, it’s not suffi­cient to know how wrong the in­surer’s es­ti­mate of your risk is; they in­sist on a pretty wide vig—not just to sur­vive both un­cer­tain­ties in their es­ti­ma­tion of risk and the mar­ket re­turns on the float, but also to com­pen­sate for the ob­served amount of suc­cess­ful ad­verse se­lec­tion that re­sults from peo­ple ap­ply­ing the above heuris­tic.

I sup­pose it may also be pos­si­ble that the in­surer won’t pay. I don’t know what ex­actly what guaran­tees we have in the U.S.

to com­pen­sate for the ob­served amount of suc­cess­ful ad­verse se­lec­tion that re­sults from peo­ple ap­ply­ing the above heuris­tic.

Ac­tu­ally, I think that for vol­un­tary in­surance, the ob­served ad­verse se­lec­tion is nega­tive, but I can’t find the cite. Peo­ple sim­ply don’t do cost-benefit calcu­la­tions. Peo­ple who buy in­surance are those who are ter­ribly risk-averse or see it as part of their role. Such peo­ple tend to be more care­ful than the gen­eral pop­u­la­tion. In a com­pet­i­tive mar­ket, the price of in­surance would be bid down to re­flect this, but it isn’t.

Some insurance is not worth getting, obviously, like insurance on laptops or music players. But the claim that insurance in general has negative expected utility assumes no risk aversion. If you can handle the risks on your own—if you are effectively self-insuring—then you probably should do that. But a house burning down, or a rare cancer that will cost millions to treat: these are not self-insurable things unless you are a millionaire.

If the ex­ist­ing model is sex­ual di­mor­phism, with high sex­ual de­sire a male trait, you could sim­ply sup­pose that it’s a “leaky” di­mor­phism, in which the sex-linked traits nonethe­less show up in the other sex with some fre­quency. In hu­mans this should es­pe­cially be pos­si­ble with male traits which de­pend not on the Y chro­mo­some, but rather on hav­ing one X chro­mo­some rather than two. That means that there is only one copy, rather than two, of the rele­vant gene, which means trait var­i­ance can be greater—in a woman, an un­usual allele on one X chro­mo­some may be diluted by a nor­mal allele on the other X, whereas a man with an un­usual X allele has no such coun­ter­bal­ance. But it would still be easy enough for a woman to end up with an un­usual allele on both her Xs.

Also, re­gard­less of the spe­cific ge­netic mechanism, hu­man di­mor­phism is just not very ex­treme or ab­solute (com­pared to many other species), and forms in­ter­me­di­ate be­tween stereo­typ­i­cal male and fe­male ex­tremes are quite com­mon.
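Under textbook Hardy-Weinberg assumptions (random mating, no selection), the arithmetic behind "leaky" X-linked dimorphism is simple: an allele at population frequency q is expressed by a fraction q of men (one X suffices) but only q² of women (it must sit on both Xs). A toy sketch, where q = 0.10 is an arbitrary made-up value:

```python
def expression_rates(q):
    """For an X-linked allele at population frequency q (Hardy-Weinberg,
    random mating): a man expresses it with one copy (his single X),
    a woman only when both of her X chromosomes carry it."""
    return q, q * q  # (male rate, female rate)

m, f = expression_rates(0.10)  # q = 0.10 is an arbitrary example
print(m)                       # 0.1
print(round(f, 4))             # 0.01: ten times rarer in women, but far from absent
```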

I thought it was pretty clear. Sex­ual Di­mor­phism doesn’t op­er­ate the way you think it does. Women with high sex drives aren’t rare at all.

I have heard that, for most men and most women, the time of highest sex drive happens at very different ages (much younger for men than for women). This might account for the entire difference, especially if you're getting most of your information from the culture at large. As TVTropes will tell you, Most Writers Are Male.

This question reads to me like it’s out of the middle of some discussion I didn’t hear the beginning of. Why were “nymphomaniacs” on your mind in the first place? What do you mean by the word? I don’t think I’ve heard it in many years, and I associate it with the sexual superstitions of a former age.

What does the word “nymphomaniac” mean? How do you judge someone to be sufficiently obsessed with sex to be a nymphomaniac? I think a lot of your confusion might be coming from your tendency to label people with a word that carries such negative connotations.

Does the question “what is with women who want to have sex [five times a week*] and will undertake to get it?” resolve any of your confusion? You should expect women who have more sex to be more salient when people talk about them, so they would seem more prominent even if they were only 2% of the population.

I think that’s kinda my point. I was attempting to point out that he’s probably confusing the term “nymphomaniac”, with its negative connotations, with “likes to have [vaguely defined ‘a lot’] of sex.”

“Nymphomaniac” hasn’t been a clinical diagnosis for a long time. In my experience, the word is now most commonly used colloquially to mean “a woman who likes to have a lot of sex”. Whether this has negative connotations depends on your attitude to sex, I suppose.

Picking a number for this seems like a really bad idea. For most modern clinical definitions of disorders, what matters is whether the behavior interferes with normal daily life. Even that is questionable, since what constitutes interference is very hard to tell.

Societies have had very different notions of what is acceptable sexuality for both males and females. Until fairly recently, homosexuality was considered a mental disorder in the US. And in the Victorian era, women were routinely diagnosed as nymphomaniacs for showing pretty minimal signs of sexuality.

You should only assign a calibrated confidence of 98% if you’re confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We’ll keep track of how often you’re right, over time, and if it turns out that when you say “90% sure” you’re right about 7 times out of 10, then we’ll say you’re poorly calibrated.

...

What we mean by “probability” is that if you utter the words “two percent probability” on fifty independent occasions, it better not happen more than once.

...

If you say “98% probable” a thousand times, and you are surprised only five times, we still ding you for poor calibration. You’re allocating too much probability mass to the possibility that you’re wrong. You should say “99.5% probable” to maximize your score. The scoring rule rewards accurate calibration, encouraging neither humility nor arrogance.
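The quoted claim that the scoring rule “encourages neither humility nor arrogance” can be checked numerically: under the logarithmic scoring rule, your expected score is maximized by reporting the frequency at which you are actually right. A minimal sketch (the candidate probabilities are arbitrary choices of mine):

```python
import math

def expected_log_score(reported, true_p):
    """Expected log score when the event truly occurs with probability
    true_p and you announce probability `reported`."""
    return (true_p * math.log(reported)
            + (1 - true_p) * math.log(1 - reported))

# If events you call "98% probable" really happen 99.5% of the time,
# reporting 0.995 scores better in expectation than reporting 0.98:
true_p = 0.995
candidates = [0.90, 0.98, 0.995, 0.999]
best = max(candidates, key=lambda r: expected_log_score(r, true_p))
print(best)  # 0.995: honesty about your true hit rate is optimal
```

Both over- and under-confidence lose expected score, which is what makes the log score a “proper” scoring rule.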

So I have a question. Is this not an endorsement of frequentism? I don’t think I understand fully, but isn’t counting the instances of the event exactly frequentist methodology? How could this be Bayesian?

As I understand it, frequentism requires large numbers of events for its interpretation of probability, whereas the Bayesian interpretation allows relative frequencies to converge to probabilities, but claims that probability is a meaningful concept even when applied to unique events, as a “degree of plausibility”.

Do you (or anyone else reading this) know of any attempts to give a precise non-frequentist interpretation of the exact numerical values of Bayesian probabilities? What I mean is someone trying to give a precise meaning to the claim that the “degree of plausibility” of a hypothesis (or prediction or whatever) is, say, 0.98, which wouldn’t boil down to the frequentist observation that, relative to some reference class, it would be right 98 times out of 100, as in the above quoted example.

Or to put it in a way that might perhaps be clearer, suppose we’re dealing with the claim that the “degree of plausibility” of a hypothesis is 0.2. Not 0.19, or 0.21, or even 0.1999 or 0.2001, but exactly that specific value. Now, I have no intuition whatsoever for what it might mean that the “degree of plausibility” I assign to some proposition is equal to one of these numbers and not any of the other mentioned ones, except if I can conceive of an experiment or observation (or at least a thought experiment) that would yield that particular exact number via a frequentist ratio.

I’m not trying to open the whole Bayesian vs. frequentist can of worms at this moment; I’d just like to find out if I’ve missed any significant references that discuss this particular question.

Yes, I remember reading that post a while ago when I was still just lurking here. But I forgot about it in the meantime, so thanks for bringing it to my attention again. It’s something I’ll definitely need to think about more.

In the Bayesian interpretation, the numerical value of a probability is derived via considerations such as the principle of indifference: if I know nothing more about proposition A than I know about proposition B, then I hold both equally probable. (So, if all I know about a coin is that it is a biased coin, without knowing how it is biased, I still hold heads and tails equally probable as outcomes of the next coin flip.)

If we do know something more about A or B, then we can apply formulae such as the sum rule or product rule, or Bayes’ rule which is derived from them, to obtain a “posterior probability” based on our initial estimation (or “prior probability”). (In the coin example, I would be able to take into account any number of coin flips as evidence, but I would first need to specify, through such a prior probability, what I take “a biased coin” to mean in terms of probability; whereas a frequentist approach relies only on flipping the coin enough times to reach a given degree of confidence.)
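One conventional way to cash out the coin example, assuming (this is my choice of prior, not something the comment specifies) a Beta distribution over the unknown heads probability:

```python
# Conjugate Beta-Binomial updating. A Beta(a, b) prior over the heads
# probability is one way to make "a biased coin" precise; Beta(1, 1)
# is the uniform prior (all biases equally plausible).
a, b = 1.0, 1.0
flips = [1, 1, 0, 1, 1, 1]  # hypothetical data: 1 = heads, 0 = tails
for f in flips:
    if f:
        a += 1  # each head adds one pseudo-count to the heads side
    else:
        b += 1  # each tail adds one pseudo-count to the tails side

# Posterior predictive probability that the next flip is heads:
posterior_mean = a / (a + b)
print(posterior_mean)  # 0.75, i.e. (1 + 5 heads) / (2 + 6 flips)
```

A different prior (say, one concentrated away from 0.5 to encode “biased”) would give a different posterior from the same flips, which is exactly the point the comment makes about needing to specify the prior.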

(Note, this is my understanding based on having partially read through precisely one text, Jaynes’ Probability Theory, on top of some Web browsing; not an expert’s opinion.)

Yes, you can do this precisely with measure theory, but some will argue that that is nice math but not a philosophically satisfying approach.

I’m not sure I understand what exactly you have in mind. I am aware of the role of measure theory in the standard modern formalization of probability theory, and how it provides for a neat treatment of continuous probability distributions. However, what I’m interested in is not the math, but the meaning of the numbers in the real world.

Bayesians often make claims like, say, “I assign the probability of 0.2 to the hypothesis/prediction X.” This is a factual claim, which asserts that some quantity is equal to 0.2, not any other number. This means that those making such claims should be able to point at some observable property of the real world related to X that gives rise to this particular number, not some other one. What I’d like to find out is whether there are attempts at non-frequentist responses to this sort of request.

Edit: A more concrete approach is to just think about it as what bets you should make about possible outcomes.

But it seems to me that betting advice is fundamentally frequentist in nature. As far as I can see, the only practical test of whether a betting strategy is good or bad is the expected gain (or loss) it will provide over a large number of bets. [Edit: I should have been more clear here. I assume that you are not using an incoherent strategy vulnerable to a Dutch book. I had in mind strategies where you respect the axioms of probability, and the only question is which numbers consistent with them you choose.]

Bayesians would say that the probability is (some function of) the expected value of one bet.

Frequentists would say that it is (some function of) the actual value of many bets (as the number of bets goes to infinity).

The whole point of looking at many bets is to make the average value close to the expected value (so that frequentists don’t have to think about what “expected” actually means). You never have to say “the expected gain … over a large number of bets”; that would be redundant.
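The relationship between the two readings can be shown by simulation: the average realized value of many independent bets converges on the expected value of a single bet. A quick sketch with arbitrary numbers of my own choosing:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

p, payoff = 0.2, 100.0   # a bet that pays $100 with probability 0.2
expected = p * payoff    # Bayesian reading: expected value of ONE bet

# Frequentist reading: the average realized value over many bets.
n = 200_000
total = sum(payoff if random.random() < p else 0.0 for _ in range(n))
average = total / n

# By the law of large numbers the two numbers converge as n grows:
print(abs(average - expected) < 1.0)
```

With 200,000 trials the standard deviation of the average is well under a dime, so the gap is tiny; the disagreement in the thread is over which of the two quantities is the *definition* of probability, not over whether they converge.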

What does “expected” actually mean? It’s just the probability you should bet at to avoid the possibility of being Dutch-booked on any single bet.

ETA: When you are being Dutch-booked, you don’t get to look at all the offered bets at once and say “hold on a minute, you’re trying to trick me”. You get given each of the bets one at a time, and you have to bet Bayesianly on each one if you want to avoid any possibility of sure losses.

I might be mistaken, but I think this still doesn’t answer my question. I understand, or at least I think I do, how the Dutch book argument can be used to establish the axioms of probability and the entire mathematical theory that follows from them (including Bayes’ theorem).

The way I understand it, this argument says that once I’ve assigned some probability to an event, I must assign all the other probabilities in a way consistent with the probability axioms. For example, if I assign P(A) = 0.3 and P(B) = 0.4, I would be opening myself to a Dutch book if I assigned, say, P(~A) != 0.7 or P(A and B) > 0.3. So far, so good.

However, I still don’t see what, if anything, the Dutch book argument tells us about the ultimate meaning of the probability numbers. If I claim that the probability of Elbonia declaring war on Ruritania before next Christmas is 0.3, then to avoid being Dutch-booked, I must maintain that the probability of that event not happening is 0.7, along with all the other relations necessitated by the probability axioms. However, if someone comes to me and claims that the probability is not 0.3, but 0.4 instead, in what way could he argue, under any imaginable circumstances and either before or after the fact, that his figure is correct and mine is not? What fact observable in physical reality could he point to and say that it’s consistent with one number, but not the other?

I understand that if we both stick to our different probabilities and make bets based on them, we can get Dutch-booked collectively (someone sells him a bet that pays off $100 if the war breaks out for $39, sells me a bet that pays off $100 in the reverse case for $69, and wins $8 whatever happens). But this merely tells us that something irrational is going on if we insist (and act) on different probability estimates. It doesn’t tell us, as far as I can see, how one number could be correct and all others incorrect, unless we start talking about a large reference class of events and frequencies at some point.
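The arithmetic of that collective Dutch book checks out; a one-line verification using the same numbers as the parenthesis above:

```python
# The bookie sells the 0.4-believer a $100-if-war ticket for $39 and
# the 0.3-believer a $100-if-no-war ticket for $69. Exactly one ticket
# pays out, whichever way Elbonia decides.
collected = 39 + 69          # $108 taken in up front
payout = 100                 # paid out in either outcome
profit_if_war = collected - payout
profit_if_peace = collected - payout
print(profit_if_war, profit_if_peace)  # 8 8: a sure $8 either way
```

The sure profit exists precisely because the two implied probabilities (0.4 for war and 0.7 for no war) sum to more than 1, which is the incoherence the Dutch book argument punishes; as the comment notes, it says nothing about which single coherent assignment is the right one.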