2011 Less Wrong Census /​ Survey

The fi­nal straw was notic­ing a com­ment refer­ring to “the most re­cent sur­vey I know of” and re­al­iz­ing it was from May 2009. I think it is well past time for an­other sur­vey, so here is one now.

I’ve tried to keep the structure of the last survey intact so it will be easy to compare results and see changes over time, but there were a few problems with the last survey that required changes, and a few questions from the last survey that just didn’t apply as much anymore (how many people have strong feelings on Three Worlds Collide these days?).

Please try to give se­ri­ous an­swers that are easy to pro­cess by com­puter (see the in­tro­duc­tion). And please let me know as soon as pos­si­ble if there are any se­cu­rity prob­lems (peo­ple other than me who can ac­cess the data) or any ab­solutely awful ques­tions.

I will prob­a­bly run the sur­vey for about a month un­less new peo­ple stop re­spond­ing well be­fore that. Like the last sur­vey, I’ll try to calcu­late some re­sults my­self and re­lease the raw data (minus the peo­ple who want to keep theirs pri­vate) for any­one else who wants to ex­am­ine it.

Like the last sur­vey, if you take it and post that you took it here, I will up­vote you, and I hope other peo­ple will up­vote you too.

I hate cognitive biases. I read your comment right before I went to take the test. “Ha!” I thought to myself, “clearly members of Less Wrong wouldn’t be as affected. Why even bother mentioning it?” And then I clicked on the link while I thought about the singularity. “Hmm, 2100 is a decent year; maybe it’ll be 20 years before that though...” And I filled in my race/education/sex. “Hmm, maybe it would be after that though, due to... oh god, it’s the anchoring effect! Quick, think of other numbers! 2090! 2110! Damnit. 1776! Wait, that won’t work...”

And as I slowly worked my way down, my brain tried in vain to come up with alternate years. Until I finally reached the problem. “Is this really what I think, or am I just putting this answer because of that comment in the thread?” But it didn’t matter. The numbers were in the box, and I couldn’t convince myself to change them.

There it stood: 2100.

PS. Yvain, any chance you could look at the mean/median/mode/standard deviation of that problem before and after you changed the questions around? I’d be very interested in seeing how people were affected by anchoring.

After read­ing the feed­back I’ve made the fol­low­ing changes (af­ter the first 104 en­tries so that any­one who has ac­cess to the data can check if there are sig­nifi­cant differ­ences be­fore and af­ter these changes):

Added an “other” op­tion in gender

Moved “date of sin­gu­lar­ity” above ques­tion men­tion­ing 2100 to avoid an­chor­ing. Really I should also move the New­ton ques­tion for the same rea­son, but I’m not go­ing to.

Changed wording of anti-agathics question to “at least one person”

Added a “don’t know / no preference” to relationship style

Clarified to answer probability as a percent and not a decimal; I’ll go back and fix anyone who got this wrong, though. If you seriously mean a very low percent, like “.05%”, please end with a percent mark so I know not to change it. Otherwise, leave the percent mark out.

Added a “gov­ern­ment work” op­tion.

Deleted “divorced”. Divorced people can just put “single”.

Added “eco­nomic/​poli­ti­cal col­lapse” to xrisk

Added “other” to xrisk

Added a ques­tion “Have you ever been to a Less Wrong meetup?” Please do NOT re­take the sur­vey to an­swer this ques­tion. I’ll just grab statis­tics from the peo­ple who an­swered this af­ter it was put up, while rec­og­niz­ing it might be flawed.

I did NOT add an “Other” to poli­tics de­spite re­quests to do so, be­cause I tried this last time and ended up with peo­ple send­ing me man­i­festos. I want to en­courage peo­ple to choose whichever of those cat­e­gories they’re clos­est to. If you re­ally don’t iden­tify at all with any of those cat­e­gories, just leave it blank.
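The percent-vs-decimal cleanup rule from the change list could be sketched like this. This is a hypothetical helper, not Yvain’s actual script; the function name and the “values of at most 1 are decimals” cutoff are my assumptions:

```python
def normalize_probability(answer: str) -> float:
    """Interpret a survey probability answer as a percentage.

    Sketch of the rule above (hypothetical cutoff): answers ending
    in '%' are taken literally, and unmarked values of at most 1
    are assumed to be decimals and converted to percent.
    """
    s = answer.strip()
    if s.endswith("%"):
        return float(s[:-1])  # ".05%" really means 0.05 percent
    value = float(s)
    return value * 100 if value <= 1 else value
```

So an unmarked “0.5” would be read as 50%, while “.05%” stays at 0.05%.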

(It might be bet­ter to add a box ask­ing for marks/​cer­tifi­cate re­ceived upon leav­ing high school and the name of the pro­gram; with suffi­cient re­spon­dents there may be enough data to say mean­ingful things)

Also, do I un­der­stand you cor­rectly that the be­ings (con­ceiv­ably) run­ning the uni­verse as a simu­la­tion do not count as su­per­nat­u­ral/​gods for pur­poses of the su­per­nat­u­ral/​gods ques­tions?

I feel like sev­eral of the sin­gle-punch ques­tions should be multi-punch. Both “pro­fes­sion” and “Work sta­tus” gave me pause. Also, I had to figure out what the right thing to fill in for “fam­ily re­li­gion” was, since we had sev­eral.

And there are sev­eral ex­tremely com­mon moral views not rep­re­sented in your list of moral the­o­ries. One of the more pop­u­lar is “All moral the­o­ries have some grain of truth, and we should use a com­bi­na­tion along with our in­tu­ition”. For ques­tions like this, you might use as your model the Philpa­pers sur­vey, though I also worry that this ques­tion might not make a lot of sense to most peo­ple with­out at least rough defi­ni­tions alongside the an­swer choices.

About the government work issue: if I work for an aerospace company that gets all of its business from the government, does that count as “for profit” or “government work” for purposes of the question?

I did take the survey; however, I found something I was unsure what to put down for, and had to type in an explanation/question:

It was for the ques­tion: “By what year do you think the Sin­gu­lar­ity will oc­cur? An­swer such that you think there is an even chance of the Sin­gu­lar­ity fal­ling be­fore or af­ter that year. If you don’t think a Sin­gu­lar­ity will ever hap­pen, leave blank.”

If I think the singularity is slightly less than 50% likely overall, what should I have put? It seemed off to leave it blank and imply I believed “I don’t think a Singularity will ever happen”, because that statement seemed to convey a great deal more certainty than 50+epsilon%. However, if I actually believed there was a less than 50% chance of it happening, I’m not going to reach an even chance of it happening or not happening by any particular year.

As a side note, af­ter tak­ing that test, I re­al­ized that I don’t feel very con­fi­dent on a sub­stan­tial num­ber of things.

I think that there need to be two sep­a­rate ques­tions here. Prob­a­bil­ity of Sin­gu­lar­ity, and year it hap­pens if it does. For in­stance, I’d guess about 70% chance of a sin­gu­lar­ity at all, but if it hap­pens, 2040 would be about my ex­pected date. You can’t de­scribe these two state­ments in just one num­ber.

I in­ter­preted this as “there is an even chance of the Sin­gu­lar­ity fal­ling be­fore or af­ter, [as­sum­ing it does]”. That is, if you think the prob­a­bil­ity that the Sin­gu­lar­ity will hap­pen is some­thing low like 1%, you should an­swer a year such that the prob­a­bil­ity it hap­pens by that year is 0.5%. The only way you can’t an­swer it is if you’re sure it won’t ever hap­pen.

(For example, if I thought a Singularity is very [...] very hard to achieve, I might answer 5000 AD or 500000 AD, depending on how many “very”s there are, even though I might put a very low probability on our civilization actually surviving that long.)

Given that the expected date would be skewed to infinity by a non-zero estimate of the Singularity not occurring, you can probably put your estimate of the year X so that P(S ≤ X | C) = 0.5, where S is the statistic “year the Singularity will occur” and C is the event “the Singularity will occur”.
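A small numerical illustration of this conditional reading, with an entirely made-up model (a 70% chance the Singularity happens at all, and a normal distribution over the arrival year given that it does):

```python
from statistics import NormalDist

p_occurs = 0.7                           # P(C): assumed overall chance it ever happens
arrival = NormalDist(mu=2040, sigma=30)  # assumed distribution of S given C

# The year X with P(S <= X | C) = 0.5 is just the conditional median:
x = arrival.inv_cdf(0.5)

# Unconditionally, P(S <= x) = p_occurs * 0.5 = 0.35, so when
# p_occurs < 0.5 no year gives an unconditional even chance --
# which is why the conditional reading is the only answerable one.
print(x)  # 2040.0
```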

On the “Political” question: I identify with none of those. I understand the question is about which I identify with most, but all of the options have views on both social permissiveness and economic redistribution. I am socially permissive, but have no belief one way or the other on redistribution/taxes. I simply have insufficient knowledge of that area to make a judgment. Perhaps it would be better to have two different questions, one each for social views and economic views?

For “Reli­gious views”: I am an athe­ist but would not self-iden­tify as ei­ther “spiritual” or “not spiritual”. If a per­son asked me which I was, I would ask them what they meant by spiritual. I an­swered “Athe­ist but not spiritual”, on the very weak grounds that I sus­pect I do not satisfy most other peo­ple’s con­cep­tions of spiritu­al­ity; but re­ally, the word is very ill-defined.

I second rntz’s remarks; I had very similar issues with both questions. As a side note, I would also have been interested in knowing how many people here are from non-English-speaking countries (or at least outside the US).

Anyway, this is a very interesting project; I’ll be looking forward to the results!

If there is a poli­ti­cal self-de­scrip­tion cat­e­gory in fu­ture sur­veys, an­other op­tion pos­si­bly worth adding is “an­ar­chist”. Yeah, it’s rare, but the clos­est op­tion available was “so­cial­ist”, which is still very dis­similar.

In­ci­den­tally, for those who are in­ter­ested in poli­ti­cal cat­e­go­riza­tions that might trans­late bet­ter across coun­tries (and who have an OkCupid ac­count), check out the Poli­ti­cal Ob­jec­tives test. A caveat is that, as the test it­self notes, it is still spe­cific to the coun­tries and cen­turies that con­sti­tute the mod­ern world, as “The as­sump­tion be­hind this test is that the three most im­por­tant ob­jec­tives of all-is­sues poli­ti­cal move­ments in the mod­ern era have been Equal­ity and Liberty and Sta­bil­ity.”

the three most im­por­tant ob­jec­tives of all-is­sues poli­ti­cal move­ments in the mod­ern era have been Equal­ity and Liberty and Stability

In­ter­est­ing. I won­der if this might be fram­ing too much—it seems like if some­one ac­cepted this, then a poli­ti­cal move­ment that val­ued only two of those might a pri­ori be clas­sified as not “all-is­sues”.

Done. Definitely went through the whole “check the pub­li­ca­tion date”—whoop of vic­tory—worry I was un­der­con­fi­dent rou­tine. Ex­cept silently be­cause there’s a sleep­ing per­son less than a foot away.

I’m amazed at the range of pos­si­bil­ities I con­sid­ered for some of those prob­a­bil­ities. I definitely do not have a solid grasp of re­al­ity.

I think it is gen­er­ally good to avoid “other” op­tions as much as pos­si­ble.

There are a few biases related to filling in questionnaires. For example, many psychological tests ask you the same question twice, in opposite directions. (Question #13: “Do you think the Singularity will happen?” Question #74: “Do you think the Singularity will never happen?”) This is because some people use the heuristic “when unsure, say yes” and other people use the heuristic “when unsure, say no”. So when you get two “yes” answers or two “no” answers to opposite forms of the question, you know that the person did not really answer the question.
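A sketch of how such a reversed pair could be checked when processing responses (the question numbers and yes/no coding are the hypothetical ones from the example above):

```python
def answered_by_habit(q13_happen: str, q74_never: str) -> bool:
    """Return True if a pair of opposite questions was answered
    inconsistently: 'yes'/'yes' or 'no'/'no' to "will it happen?"
    and "will it never happen?" suggests a fixed yes-saying or
    no-saying habit rather than a real opinion."""
    return q13_happen.strip().lower() == q74_never.strip().lower()
```

For example, `answered_by_habit("yes", "no")` is consistent (False), while `answered_by_habit("yes", "yes")` flags the respondent.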

Another bias is that when given three choices “yes”, “no” and “maybe”, some people will mostly choose “yes” or “no” answers, while others will prefer “maybe” answers. It does not necessarily mean that they have different opinions on the subject. It may possibly mean that they both think “yes, with 80% certainty”, but for one of them this means “yes”, and for the other one this means “maybe”. So instead of measuring their opinions on the subject, you are measuring their opinions on how much certainty is necessary to answer “yes” or “no” in the questionnaire.

Per­haps in some situ­a­tions the “other” op­tion is nec­es­sary, be­cause for some peo­ple none of the available op­tions is good even as a very rough ap­prox­i­ma­tion. But I think it should be used very care­fully, be­cause it en­courages the “I am a spe­cial snowflake” bias. For ex­am­ple, if some­one has no sex­ual feel­ings at all, then of course the “monogamy or polygamy” ques­tion does not make sense for them. But if it is “I like the idea of be­ing in love with one spe­cial per­son, but I also like the idea of hav­ing sex­ual ac­cess to many at­trac­tive peo­ple” then IMHO this at­ti­tude does not de­serve a sep­a­rate cat­e­gory and can be rounded to­wards one of the choices.

a) a sur­vey, where ev­ery­one’s in­di­vi­d­ual differ­ences are rounded into a few given cat­e­gories;

b) a col­lec­tion of blog ar­ti­cles, where ev­ery­one de­scribes them­selves ex­actly as they de­sire; or

c) a kind of sur­vey, where some par­ti­ci­pants send a blog ar­ti­cle in­stead of data.

Both (a) and (b) are valid options; each of them serves a different purpose. I would prefer to avoid (c), because it tries to do both things at the same time, and accomplishes neither. An answer of “other” sometimes means “no answer is even approximately correct”, but sometimes it just means “I prefer to send you a blog article instead of survey data”. The first objection is valid, and is IMHO equivalent to simply not answering that question. The second objection seems more like refusing the idea of statistics. Statistics does not mean that people who gave the same answer are all perfectly alike, but ignoring the minor differences allows us to see the forest instead of the trees.

I guess the “spe­cial snowflake bias” is offi­cially called “nar­cis­sism of small differ­ences”. The psy­cholog­i­cal foun­da­tion is that we have a need of iden­tity, which is threat­ened by similar things, not differ­ent ones. So when some­thing is similar to us, but not the same, we ex­ag­ger­ate the differ­ence and down­play the similar­ity. From out­side view we are prob­a­bly less differ­ent than from in­side view.

That last varies—some­times peo­ple are ex­ag­ger­at­ing differ­ences which are pretty mean­ingless. Some­times the peo­ple set­ting up the clas­sifi­ca­tions ac­tu­ally have an in­com­plete pic­ture of the ex­ist­ing cat­e­gories.

Prob­lem is that sur­vey re­sults will be treated as if ev­ery­one had ex­act an­swers, as op­posed to pick­ing the least ter­rible ap­prox­i­ma­tion. (I do have a known prefer­ence, dammit! It’s just the sub­ject of Big De­bate whether it counts as mono or as poly.)

So how do you decide which options merit inclusion? Which snowflakes are special enough—or, I suppose, mundane enough? And what’s the harm in counting how many snowflakes aren’t, even if you don’t ask them exactly what type they are?

If you are go­ing to in­clude trans­gen­der, you prob­a­bly should call the oth­ers cis. Other­wise you run the risk of im­ply­ing trans­gen­dered peo­ple are not “re­ally” their tar­get gen­der, which is a mess.

The ques­tion of aca­demic field was poorly phrased. I’m not an aca­demic, so I as­sumed you meant what aca­demic field was most rele­vant to my work. But you re­ally should ask this ques­tion with­out refer­ring to academia.

The aca­demic ques­tion and the ques­tion about field of work need more op­tions.

“Athe­ist” refers to the lack of a be­lief in gods. “Spiritual” in­cludes all sorts of other su­per­nat­u­ral no­tions, like ghosts, non-phys­i­cal minds, souls, magic, an­i­mistic spirits, mys­ti­cal en­er­gies, etc. Also, “spiritual” can re­fer to a way of look­ing at the world ex­em­plified by re­li­gions that some athe­ists con­sider a vi­tal part of the hu­man ex­pe­rience.

I’ve no­ticed some peo­ple us­ing “spiritual” to de­scribe no­tions they con­sider aes­thet­i­cally sub­lime and morally up­lift­ing but not well un­der­stood, when they are not par­tic­u­larly mo­ti­vated to un­der­stand them, with­out any com­mit­ment to their be­ing su­per­nat­u­ral. This may be what you re­fer to in your sec­ond mean­ing, I’m not sure.

There is, of course, a lot of po­ten­tial over­lap here with su­per­nat­u­ral no­tions.

I can’t speak for any­one else, but in my case it’d re­fer to some­one who is an athe­ist and ma­te­ri­al­ist on­tolog­i­cally, but who finds aes­thetic re­ward and men­tal sta­bil­ity in cer­tain forms of rit­ual and nar­ra­tive ap­plied to rel­a­tively spe­cific do­mains of life (like holi­days, rites of pas­sage and other cul­turally and cog­ni­tively-sig­nifi­cant stuff, as long as it’s been vet­ted to strip out the more ob­vi­ous kinds of crazy­mak­ing and ir­ra­tional­ity such things can in­duce).

My impression was that something like that was intended. However, this seems to be a conflation of different categories. The normal category that occurs in this sort of context is “not religious but spiritual”, which seems to generally mean people sort of like what you describe, but who also believe in various supernatural entities (e.g. God, ghosts, spirits, maybe faeries). When given the choice between “atheist” and something like “no religion” or “none”, such people will generally not put down atheist. And such people look demographically very different from atheists and agnostics. See e.g. this Pew study. My impression is that the religion questions were not phrased in a way that showed much familiarity with the underlying demographics or how such questions are generally phrased. In this particular context that’s OK, because I suspect that there are a fair number of people here who are atheist-but-spiritual under your definition, but very few people here who would fall into the “not religious but spiritual” notion that is a subset of the nones in the general population.

I think some of the “pick one” op­tions were too broadly grouped, though any mul­ti­ple-choice is go­ing to be. I’d have preferred a “no prefer­ence” for “re­la­tion­ship style”, for ex­am­ple, and more poli­ti­cal op­tions. Also I’m not sure what counts as “par­ti­ci­pates ac­tively” in other groups—I’ve been a mem­ber of tran­shu­man­ism-re­lated groups for over a decade, for ex­am­ple, but am mostly a lurker; I did not check the box.

I would have been in­ter­ested in see­ing a ques­tion about in­volve­ment in offline ac­tivi­ties like lo­cal mee­tups, or par­ti­ci­pa­tion in IRC/​other LW venues.

It seems likely you’re go­ing to get skewed an­swers for the IQ ques­tion. Mostly it’s the re­ally in­tel­li­gent and the be­low av­er­age who get (pro­fes­sional) IQ tests—av­er­age peo­ple seem less likely to get them.

I pre­dict high av­er­age IQ, but low re­sponse rate on the IQ ques­tion, which will give bad re­sults. Can you tell us how many peo­ple re­spond to that ques­tion this time? (no. of re­sponses isn’t reg­istered on the pre­vi­ous sur­vey)

I think it would be more in­for­ma­tive to ask peo­ple to take one spe­cific on­line test, now, and re­port their score. With ev­ery­one tak­ing the same test, even if it’s mis­cal­ibrated, peo­ple could at least see how they com­pare to other LWers. Ask­ing peo­ple to re­mem­ber a score they were given years ago is just go­ing to pro­duce a ridicu­lous amount of bias.

To cal­ibrate a se­ri­ous IQ test, you need to test (1) many (2) ran­domly se­lected peo­ple in (3) con­trol­led en­vi­ron­ment; and when the test is ready, you must test your sub­jects in the same en­vi­ron­ment.

Online calibration or even online testing fails condition 3. Conditions 1 and 2 make creating a test very expensive. This is why only a few serious IQ tests exist. And even those would not be considered valid when administered online.

And there is also a huge prior probability that an online IQ test is a scam. So even if they provided some explanation of how they fulfilled conditions 1, 2, and 3, I still would not trust them.

To cal­ibrate a se­ri­ous IQ test, you need to test (1) many (2) ran­domly se­lected peo­ple in (3) con­trol­led en­vi­ron­ment; and when the test is ready, you must test your sub­jects in the same en­vi­ron­ment.

If you have a test thus cal­ibrated, you can use it to eval­u­ate tests that can’t be cal­ibrated in the same way.

Here’s one that closely imitates Raven’s Progressive Matrices and claims to have been calibrated with a sample of 250,000 people: http://www.iqtest.dk/

Here’s another one: http://sifter.org/iqtest/ . I can’t find any mention of where the questions came from or how it’s calibrated, but it’s shorter and doesn’t require Flash.

Nei­ther one asks for an e-mail ad­dress or any iden­ti­fy­ing in­for­ma­tion. They might be too easy for some on LW, but harder ones tend to cost money. As Viliam_Bur pointed out, any free on­line test’s val­idity is ques­tion­able, but the first one is ba­si­cally a di­rect copy of a “real” test, and nei­ther one has any ap­par­ent ul­te­rior mo­tive. Anec­do­tally, they were both within 10 points of each other and my “real” score.

The first test gave me a score a few points be­low that on the Mensa site I did a few years ago, but I gave up early on a few ques­tions (I had about 10 min­utes left when I finished).

One weird thing about it is that there were so many questions based essentially on the same idea, which makes me think it would be possible to have a test with not-too-much-worse accuracy but half as many questions (unless they intended to test ‘stamina’ as well—but I’d guess that varies more for the same person depending on how much they’ve slept recently than across people).

I tried the second one after reading this and had similar results: 118 on the first one (implausibly low); 137 (stdev 16) on the second one (sounds about right).

Though if I was tak­ing this more se­ri­ously I’d prob­a­bly have to weigh the facts that my kids were be­ing more dis­tract­ing when I took the first one, and I ate flaxseed shortly be­fore tak­ing the sec­ond one.

I took the first one un­der rea­son­ably good con­di­tions, and the sec­ond un­der about the same con­di­tions a lit­tle while af­ter­wards.

The first one seemed like a test of en­durance as much as any­thing—it was as though my abil­ity to fo­cus was run­ning out on the last ten ques­tions or so, and pos­si­bly as though it would have been some­what eas­ier if I’d been in bet­ter phys­i­cal con­di­tion.

Gen­eral ques­tion about that sort of puz­zle—how much can effort help with them? Can they be solved re­li­ably given more time (and prob­a­bly a chance to write down the­o­ries and guesses), or does in­spira­tion have to strike fairly quickly?

In­ter­est­ing ques­tion. On the first test, I went through many of them quickly—some of them ob­vi­ously pat­tern-matched to the same kind of a puz­zle—but also solved a num­ber by star­ing at them for a few min­utes, re­fus­ing to give in to my brain’s “I don’t see any pat­terns, this doesn’t make any frakking sense, can we do some­thing else now?”. I’m cer­tain given 10 or 20 more min­utes I’d have done bet­ter. And come out with a headache, prob­a­bly.

My eyes were hurt­ing af­ter the first test, and this con­tinued (less in­tensely, I think) into the sec­ond, even though read­ing on the mon­i­tor isn’t gen­er­ally a prob­lem for me. There may also be sen­sory is­sues in­volved in scores—I was run­ning into trou­ble any­way, but hav­ing to dis­t­in­guish be­tween very dark gray squares and black squares in one of the later puz­zles didn’t help. If I had more of a differ­ent sort of in­tel­li­gence, I would have thought of fid­dling with my mon­i­tor set­tings.

I’m in­clined to think that prac­tice/​in­for­ma­tion could help a lot with the puz­zles—hav­ing a reper­toire of pos­si­ble pat­terns is go­ing to make solu­tions eas­ier than try­ing to find pat­terns cold.

Pos­si­bly as a re­sult of not be­ing en­tirely pleased at that 107 score, I’m doubt­ing the whole premise of IQ test­ing—that it’s im­por­tant to find out what can’t be im­proved about peo­ple’s minds.

Part of this is the arrogance problem (how complete is your knowledge of the possibility of improvement, anyway?), and the other part is wondering whether all those resources could be better put into learning how to improve what can be improved.

The other thing is that I’ve had some re­cent ev­i­dence that the ways the parts of the mind are in­ter­con­nected aren’t com­pletely ob­vi­ous. I’ve been do­ing some psy­cholog­i­cal work on fad­ing out self-ha­tred, and the re­sults have been be­ing less fright­ened about what I post (I de­cided be­fore tak­ing the IQ tests to post my scores, but there was still a bit of a pang), eas­ier and faster typ­ing—not tested, but I do seem some­what apt to write at greater length (this seems to be the re­sult of feel­ing less need to over-mon­i­tor so that typ­ing can be a low-level habit), less akra­sia (still pretty bad, but the de­sire to do things is hap­pen­ing more of­ten), and the abil­ity to walk down­stairs more eas­ily (I have some old knee in­juries which can be ame­lio­rated by bet­ter co­or­di­na­tion—but I haven’t been work­ing on co­or­di­na­tion).

In this type of test, I can generally solve all except about 4 of them almost immediately, with a few seconds of thought. I skip those few, then return to them at the end, and in the minutes that remain manage to make an educated guess for, say, two of them, while having to leave two more to complete chance.

I don’t use my spatial skills in my daily work the way I used to use them in my daily school work, and both online tests seem to measure only that.

I found the second test much more difficult—there wasn’t enough information to derive the exact missing item, so you had to choose things that could be explained with the simplest/fewest rules. There were some where I disagreed that the correct answer had a simpler rule-set. The problem style is also highly learnable, and I question the diagnostic value of “figuring out” that you’re looking at a 3x3 matrix where operations occur as you move around it, but various cells have been obscured to make the problem harder. Not including instructions makes it feel like there’s a secret handshake to get in.

Go­ing with the lower re­sult for the pur­pose of Yvain’s sur­vey. I found the sec­ond re­sult a lit­tle sus­pect be­cause a lot of ques­tions on the sec­ond test made lit­tle sense to me. I would of­ten see 2-3 pos­si­ble an­swers that made more or less equal (small) sense to me, and had to take a gut feel­ing guess on which the au­thor might have pos­si­bly meant.

Maybe I just got lucky. Or my gut is a bet­ter thinker than I sus­pected.

Got 135 on the first test. Got 139 on the Stan­ford-Binet/​USA scale (stdev 16) in the sec­ond. This seems about right.

But since the sec­ond one was po­lite enough to tell me which an­swers I got wrong, I have to call bul­lshit on it: some of the “cor­rect” an­swers it claimed made no sense, and seemed more wrong and illog­i­cal than the ones I had placed.

With ev­ery­one tak­ing the same test, even if it’s mis­cal­ibrated, peo­ple could at least see how they com­pare to other LWers.

There are two ways an IQ test can fail:
a) it can be mis­cal­ibrated;
b) it can mea­sure some­thing else than IQ.

If you only want to know your per­centile in LW pop­u­la­tion, (a) is not a prob­lem, but (b) re­mains. What if the test does not mea­sure the “gen­eral in­tel­li­gence fac­tor”, but some­thing else? It can partly cor­re­late to IQ, and partly to some­thing else, e.g. math­e­mat­i­cal or ver­bal skills.

Also you have a preselection bias: some LWers will fill in the survey, others won’t.

Don’t forget those of us who aren’t native English speakers. I haven’t tried it again recently, but I used to have a 5-10 point difference between an IQ test in French (my native language) and one in English. Word-related questions are of course harder, but even for the rest, I’m not sure if it’s because it took me longer to process the English (while the IQ test is time-limited), or just that decoding a non-native language uses more brain power (leaving less for solving the problem). But anyway, I score better in my native language than in English, and I answered with my score in my native language.

I un­der­went a real IQ test when I was young, and so I can say that this es­ti­ma­tion sig­nifi­cantly over­shoots my ac­tual score. But that’s be­cause it fac­tors in test-tak­ing as a skill (one that I’m good at).
Then again, I’m also a lit­tle shocked that the table on that site puts an SAT score of 1420 at the 99.9th per­centile. At my high school there were, to my knowl­edge, at least 10 peo­ple with that high of a score (and that’s only those I knew of), not to men­tion one perfect score. This is out of ~700 peo­ple. Does that mean my school was, on av­er­age, at the 90th per­centile of in­tel­li­gence? Or just at the 90th per­centile of study­ing hard (much more likely I think).

If you’re in the me­dian age band for Less Wrong, you mis­read the es­ti­ma­tor. The “SAT to IQ” table is for the pre-1995 SAT, which had much more rar­efied heights. The “SAT I to IQ” table is for the 1995-2005 SAT.

And of course, there are also SAT prep services which offer guarantees of raising your score by such and such an amount (my mother thought I ought to try working for one, given my own SAT scores and the high pay, but I don’t want to join the Dark Side and work in favor of more inequality of education by income), and these services are almost certainly not raising their recipients’ IQs.

I’ve never taken an IQ test, so when I responded to the survey I considered estimating my IQ based on my SAT and GRE scores. The result, according to the site torekp linked to, is surprisingly high (150+). I think I’m smart, but not that smart. Anyone have any idea if these estimators should be trusted at all?

GRE quan­ti­ta­tive scores are not use­ful for high-IQ es­ti­mates be­cause 6% of peo­ple get perfect scores.

A perfect GRE verbal score is roughly the 99.8th percentile, as can be inferred from the charts in this PDF: http://www.ets.org/Media/Tests/GRE/pdf/994994.pdf . It shows that the percentage of people with a perfect score varies between less than 0.1% and 1.5%, depending on field, but it is usually 0.1% or 0.2%. (The 1.5% field was philosophy.) Because many non-native English speakers take the test, it’s likely that one ought to adjust that percentile a bit lower.

That’s among people applying to grad school, which is a higher-IQ group than the general population, but not by so much that the 99.8th percentile among grad school applicants corresponds to the 99.996th percentile among the general population, as that site (http://www.iqcomparisonsite.com/GREIQ.aspx) claims. That would be impossible assuming more than one in fifty people applies to grad school.

If we at­tribute a perfect GRE score to the 99.8th per­centile, then look­ing up that per­centile on the chart on the same page, we get an IQ score >142 for 1600 on the GRE.
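That chart lookup is just the usual normal-model conversion from percentile to IQ (mean 100; the SD of 15 here is an assumption, since some scales use 16). A quick check with Python’s standard library gives the same ballpark:

```python
from statistics import NormalDist

def percentile_to_iq(percentile: float, sd: float = 15.0) -> float:
    """Convert a population percentile (as a fraction) to an IQ score,
    assuming IQ is normally distributed with mean 100 and the given SD."""
    z = NormalDist().inv_cdf(percentile)  # standard-normal quantile
    return 100 + sd * z

# The 99.8th percentile discussed above:
print(round(percentile_to_iq(0.998)))  # about 143, in line with ">142"
```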

I’ve only got the one data point, but my tested IQ is within a cou­ple points of what that site pre­dicts from my SAT score. I took the tests al­most a decade apart, though, so this could be co­in­ci­den­tal; scores for both tests aren’t that sta­ble over that kind of timeframe, I don’t think.

My IQ ac­cord­ing to the es­ti­ma­tor would put me in the 99.995th per­centile, but it seems to me that at least 5% of my friends and ac­quain­tances are at least as smart as me. Part of this is prob­a­bly se­lec­tion bias, but I doubt that could ac­count for it com­pletely. I don’t move in par­tic­u­larly ex­alted cir­cles.

EDIT: If you had asked me to es­ti­mate my IQ be­fore I con­sulted the web­site, I would have said 135. I’d prob­a­bly still say that, ac­tu­ally. I’m guess­ing the GRE-to-IQ con­ver­sion is use­less above some ceiling.

FYI, if you’re in the me­dian age band for Less Wrong, you mis­read the es­ti­ma­tor—I know, be­cause I made the same mis­take. Click­ing “SAT to IQ” on the left shows a table for the test prior to a re-cen­ter­ing in 1995, whereas “SAT I to IQ” shows the table for tests given be­tween 1995 and 2005. The lat­ter’s top end is much less ex­cep­tional.

My (limited) back­ground knowl­edge is that SATs, GREs, etc. are de­signed for peo­ple near the av­er­age, and give im­pre­cise re­sults for the high­est IQs. You’re prob­a­bly in that range the tests aren’t very good for.

I was won­der­ing if the IQ-cal­ibra­tion ques­tion was refer­ring to re­ported or ac­tual IQ. It seems to be the lat­ter, but the former would be much more fun to think about.

Also, are so many LWers com­fortable es­ti­mat­ing with high con­fi­dence that they are in the 99.9th per­centile? Or even higher? Is this com­mu­nity re­ally that smart? I mean, I know I’m smarter than the ma­jor­ity of peo­ple I meet, but 999 out of ev­ery 1000? Or am I just be­ing overly en­thu­si­as­tic in cor­rect­ing for cog­ni­tive bias?

I’d es­ti­mate with high con­fi­dence that I’m higher than that. Sub­jec­tively, I’ve only met a cou­ple of peo­ple in my life who seem definitely smarter than me. And I’ve barely met any­one who was mal­nour­ished or lack­ing in ed­u­ca­tion. That said, there is the “ev­ery­one else is stupid” bias.

ETA: In case it wasn’t clear from the out­set, on the out­side view, most peo­ple with this no­tion are wrong, and there’s a re­cur­sive prob­lem in jus­tify­ing that I’m spe­cial. But in­tel­li­gence tests, though im­perfect, are a good hint.

I’m not con­tra­dict­ing you at all, but I’m just cu­ri­ous: how do you know that you are smarter than vir­tu­ally ev­ery­one you meet? If there is any­thing more to it than an in­tu­ition, I’d love to know about it. I’ve always won­dered if there was some se­cret smart-per­son hand­shake that I wasn’t privy to.

Personally, I'd say the lower 80 or 90% immediately identify themselves as such, but beyond that I try to give others the benefit of the doubt. Maybe they aren't interested in the conversation, don't want to seem intelligent, or are just plain out of my league. I don't value humility very highly at all; but there aren't many things that would convince me I or someone else was demonstrably in the top fraction of the top percentile.

Also, I’ve been in­tu­itively aware of the op­ti­mism bias for as long as I can re­mem­ber, and es­ti­mates like ”.1% and 99.9%” trig­ger my skep­ti­cism mod­ule hard.

Per­son­ally, I’d say the lower 80 or 90% im­me­di­ately iden­tify them­selves as such, but be­yond that I try to give oth­ers the benefit of the doubt.

I’d agree with that state­ment, re­vis­ing it up to at least 95%. Once you’ve got it down to more than 19 in 20 peo­ple you meet be­ing ob­vi­ously-dumb, it’s worth the effort to in­spect the oth­ers more care­fully, since it’s always good hav­ing re­ally smart peo­ple around.

Also, I’ve been in­tu­itively aware of the op­ti­mism bias for as long as I can re­mem­ber, and es­ti­mates like ”.01% and 99.99%” trig­ger my skep­ti­cism mod­ule hard.

I’m much more fa­mil­iar with peo­ple think­ing 95% is an or­ders-of-mag­ni­tude higher es­ti­mate than 80%, and so I tend to ad­just oth­ers’ care­fully-thought-out es­ti­mates out­ward rather than in­ward, un­less they are 0 or 1.

ETA: It’s worth not­ing that one of the huge sig­nals smart peo­ple give off is the “OMG you’re talk­ing about some­thing that re­quires in­tel­li­gence I’m so happy to have met a smart per­son be­cause that hap­pens to me less than 5% of the time” re­ac­tion, which if rarer than I think would sig­nifi­cantly throw off my es­ti­mates.

Seem­ing “ob­vi­ously” dumb and ac­tu­ally not be­ing in the top 5% are very, very differ­ent. A per­son might just be tired, or stressed, or dis­tracted and so not ex­ude in­tel­li­gence. Or, they might be act­ing a lit­tle less in­tel­li­gent than they ac­tu­ally are, maybe for so­cial rea­sons.

For my own number I used my result from the Mensa online pre-test, which I took a few years ago for the purpose of calibrating myself. It's not a fully professional test (and not done under test conditions), but I consider it valid enough to be more than pure noise.

Look­ing at the com­ments, it seems like I am not the only one who used the sur­vey as an im­pe­tus to cre­ate an ac­count or a first post. I would be in­ter­ested to see if there was a sig­nifi­cant in­crease in the num­ber of new ac­counts while the sur­vey is run­ning (as op­posed to the av­er­age num­ber of new ac­counts when there is no cur­rent sur­vey).

...Also, I took the IQ test posted in the comments. Yeah, it also has me a good 15 points lower than what I was tested at in school.

About the probability questions: I thought you were supposed to answer them instantly, from your intuitive stance at the moment and without additional research, though I see some responders apparently did research. Perhaps it should be specified more clearly what is meant.

My fam­ily is of mixed re­li­gious back­ground, so I just ar­bi­trar­ily used my mother’s re­li­gious back­ground for those ques­tions. You might want to make the an­swer choices a lit­tle more flex­ible.

Posted. It wasn’t clear whether the IQ cal­ibra­tion ques­tion was whether your IQ would be higher than the re­ported IQ of re­spon­dents or the ac­tual IQ of re­spon­dents, and also whether that in­cluded re­spon­dents that didn’t an­swer the IQ ques­tion.

Every­one should take the sur­vey be­fore read­ing any more com­ments, in case they con­tain an­chors etc.

I took the sur­vey. My es­ti­mates will be very poorly cal­ibrated (I haven’t done much in the way of cal­ibra­tion/​es­ti­ma­tion ex­er­cises) but I’m hop­ing they’ll at least be good enough for wis­dom-of-the-crowds pur­poses and more use­ful than just leav­ing blank.

Minor quib­ble: shouldn’t “p(xrisk)” be “p(NOT xrisk)”? Just wor­ried about peo­ple in a hurry not read­ing the ques­tion prop­erly.

I know “male, fe­male, FTM, MTF, other” is a stan­dard gen­der/​sex ques­tion, but I don’t know why. A prob­lem is that it im­plies that “FTM” is a dis­tinct cat­e­gory from, rather than a sub­set of, “male” (ditto for fe­male). This would be bet­ter if other ques­tions had an­swers that were sub­sets of other an­swers, but you seem to try hard not to do that. This could be fixed by phras­ing it as “cis male”, but then you’d get peo­ple com­plain­ing about “cis” and “trans” not be­ing a perfect di­chotomy and com­plain­ing about the con­fus­ing word and so on. This could also be fixed by split­ting the ques­tion into “gen­der (male/​fe­male/​other)” and “Are you trans? (yes/​no)”, but then you’d get other com­plaints.

I wouldn’t have been too far off on the New­ton ques­tion if I had been able to re­mem­ber the map­ping be­tween cen­tury num­ber­ing and year num­ber­ing. I ended up two cen­turies off. For­tu­nately I took that into ac­count when cal­ibrat­ing.

Also, for the record: I’m not “con­sid­er­ing cry­on­ics”. I’m cry­ocras­ti­nat­ing. Cry­on­ics is ob­vi­ously the best choice, and I should be sign­ing up for it in the next five sec­onds. I will prob­a­bly die while not signed up for cry­on­ics, and that will be death by stu­pidity, and you will all get to point and laugh at my corpse.

I know “male, fe­male, FTM, MTF, other” is a stan­dard gen­der/​sex ques­tion, but I don’t know why. A prob­lem is that it im­plies that “FTM” is a dis­tinct cat­e­gory from, rather than a sub­set of, “male” (ditto for fe­male).

I don’t think that im­pli­ca­tion cre­ates con­fu­sion in the mind of any­body an­swer­ing the sur­vey, i.e. most peo­ple know what to an­swer. It’s some­what de­bat­able whether it makes “more sense” to clas­sify a FTM trans­sex­ual as male be­cause of the gen­der role to which they iden­tify, or as fe­male be­cause of the chro­mo­somes they have, so sidestep­ping the whole ques­tion by us­ing four cat­e­gories seems like a rea­son­able solu­tion for a sur­vey (or at least, if I was do­ing a sur­vey, that’s why I’d use those four cat­e­gories).

Us­ing things like “cis male” might make the ques­tions more tech­ni­cally ac­cu­rate, but it won’t make any­body less con­fused about how to an­swer, and will prob­a­bly make some more con­fused.

FTM trans­sex­u­als usu­ally con­sider it offen­sive not to be clas­sified as men (ei­ther by be­ing clas­sified as non-men or by avoid­ing the ques­tion), though ar­guably we could take the stick out of our asses.

Unless you actually do a karyotype test on an individual, you don't know what chromosomes they have, and that can't be inferred with certainty from assigned gender at birth, primary or secondary sexual characteristics, or similar macroscale traits. A non-negligible portion of the population have chromosomes that don't correspond to XX/XY, and said anomalies do not reliably correlate with a transgender identity.

This could also be fixed by split­ting the ques­tion into “gen­der (male/​fe­male/​other)” and “Are you trans? (yes/​no)”, but then you’d get other com­plaints.

I was go­ing to raise ex­actly that is­sue and sug­gest that solu­tion. What com­plaints would you ex­pect, though? I don’t know if I’d re­ally ex­pect any non-trans LWers to be in­sulted at the mere sug­ges­tion that the ques­tion is worth ask­ing.

Also, for the record: I’m not “con­sid­er­ing cry­on­ics”. I’m cry­ocras­ti­nat­ing. Cry­on­ics is ob­vi­ously the best choice, and I should be sign­ing up for it in the next five sec­onds.

I don’t want to point and laugh at your corpse. Please im­ple­ment what you con­sider to be the ob­vi­ous best choice. If you don’t know how to get started, con­tact Rudi Hoff­man. He will walk you through the pro­cess. Get started to­day.

I know “male, fe­male, FTM, MTF, other” is a stan­dard gen­der/​sex ques­tion, but I don’t know why. A prob­lem is that it im­plies that “FTM” is a dis­tinct cat­e­gory from, rather than a sub­set of, “male” (ditto for fe­male).

Is that a stan­dard gen­der/​sex ques­tion? As some­one who’s been pro­gram­ming mar­ket re­search sur­veys for sev­eral years, I’ve never seen any­thing like it.

Yes, as some­one with no skin in the game, so to speak, I was nonethe­less un­com­fortable dis­clos­ing not just the gen­der “male” but also the ini­tial state of my gen­i­talia. What kind of per­son asks about a baby’s junk?

Yeah, that confused me too. What's the point of asking that question in the first place? Just to collect more features for some clustering model, or what? Then why not ask people's age or weight or hair color as well?

Yes, the ‘race’ ques­tion was par­tic­u­larly weird since it did not have refer­ence to the coun­try of ori­gin. Nor­mally, sur­veys con­ducted in differ­ent coun­tries have very differ­ent break­downs of what ‘race’ is sup­posed to mean.

At least it had both the Bri­tish and Amer­i­can ver­sions of “Asian”.

Yeah, I don’t think many peo­ple out­side North Amer­ica would break up White into His­panic and non-His­panic. (At least, it didn’t say “Lat­ino”—I didn’t find out what it’s sup­posed to mean un­til re­cently, and as a re­sult, be­ing Ital­ian, I had classed my­self as a Lat­ino a few times.)

The US Cen­sus Bureau uses this odd sys­tem for his­tor­i­cal/​poli­ti­cal rea­sons. I don’t think it re­flects very much how Amer­i­cans cat­e­go­rize the world. I don’t know why Yvain used it, I don’t think he’s even Amer­i­can.

The cry­on­ics ques­tion is bro­ken! I couldn’t an­swer it with­out sus­pect­ing it would be mis­lead­ing. My p would be in­cred­ibly low but only be­cause my p for the hu­man species sur­viv­ing is low. This is a tech­ni­cally cor­rect way to an­swer the ques­tion but I am not at all con­fi­dent that ev­ery­one else would an­swer liter­ally, in­clud­ing the ob­vi­ous con­sid­er­a­tion “if ev­ery­one else is dead, yeah, you die too”. Or, even if ev­ery­one did, I am not con­fi­dent that the ap­pro­pri­ate math would be done on a per-par­ti­ci­pant level in the re­sults for the p(cryo) to be mean­ingful.

I an­swered that ques­tion in­ter­pret­ing it liter­ally, even though “I’d as­sign prob­a­bil­ity 1% that a ran­domly-cho­sen per­son cry­op­re­served as of 1 Nov 2011 will be even­tu­ally re­vived” doesn’t im­ply “I think that ap­prox­i­mately 1% of the peo­ple cry­op­re­served as of 1 Nov 2011 will be even­tu­ally re­vived”, since the prob­a­bil­ities for differ­ent peo­ple are nowhere near be­ing un­cor­re­lated.

I gave a low prob­a­bil­ity, not be­cause I don’t think that re­viv­ing peo­ple is pos­si­ble, or dis­cov­er­able soon, but be­cause I see some poli­ti­cal trends to­day that I think are very likely to re­sult in mobs de­stroy­ing the fa­cil­ities be­fore we can be re­vived. (And even if that doesn’t hap­pen, sooner or later some coun­try is go­ing to use nan­otech in mil­i­tary ways, which—if the hu­man race sur­vives—may well re­sult in the en­tire field be­ing ei­ther banned or clas­sified and stay­ing that way.)

The definition of communism is certainly a straw man. It's not surprising that LWers don't know the difference between Stalinism and Social Democracy, and don't know about Anarchism at all, but I was still disappointed.

Thought you might have included an option for "reactionary" on the political orientation question. The distinction between reactionary and libertarian or conservative is substantial, even given that the match isn't supposed to be perfect.

The global warm­ing ques­tion might be more dis­crim­i­nat­ing if the ques­tion were whether some­one thinks that the main­stream view on AGW is sci­en­tifi­cally valid within rea­son. The ques­tion as it stands is vague, hing­ing on the in­ter­pre­ta­tion of “sig­nifi­cant”.

But who self-iden­ti­fies as a re­ac­tionary? That said, there are a num­ber of large holes in the poli­ti­cal ques­tion. A Left Anar­chist is go­ing to feel severely pissed off with hav­ing to choose be­tween state so­cial­ism and an­ar­cho cap­i­tal­ism.

I just took it. My is­sue, which I haven’t seen men­tioned yet, is with the use of “ag­nos­tic” as a mid­point on the scale be­tween the­ism and athe­ism. I re­al­ize that’s a com­mon col­lo­quial use now but I don’t get how it’s a mean­ingful cat­e­gory—un­less it’s meant to re­fer to nega­tive athe­ism, and the “athe­ism” an­swers re­fer to pos­i­tive athe­ism? And in the his­tor­i­cal use of “ag­nos­tic” I think it’s a sep­a­rate cat­e­gory al­to­gether that could over­lap with both athe­ism and the­ism.

Over­all I found the ques­tions very in­ter­est­ing though, and I’m cu­ri­ous to see the re­sults.

It makes sense if one means by "agnostic" not "cannot be known" but "I don't know" or "I'm unsure." This makes sense in a general context and even more so in a Bayesian context. In that context, theists would be people for whom P(God exists) is high, atheists people for whom P(God exists) is low, and agnostics those in the midrange.

To some ex­tent, but not ev­ery­one may have a spe­cific prob­a­bil­ity. And differ­ent peo­ple may out­line the spe­cific prob­a­bil­ities differ­ently. Ask­ing it as the­ist/​ag­nos­tic/​athe­ist also is im­plic­itly ask­ing about so­ciolog­i­cal, psy­cholog­i­cal, and episte­molog­i­cal norms at the same time due to the con­no­ta­tions of each of those terms.

I agree that it could be ask­ing about which la­bel peo­ple iden­tify with and how that re­flects those var­i­ous norms, and that would also be an in­ter­est­ing ques­tion—but in that case it should have been worded differ­ently, or there should have at least been an “other” cat­e­gory. The way it was pre­sented sug­gests an ex­haus­tive scale.

That's how I felt. There is such a thing as a personal moral code or system, and we can examine what happens to groups of people who are running various types and mixtures. We can try to determine which moral memes have the best outcomes and are most likely to spread and be executed closely, and we can try to follow those codes.

Maybe that's pragmatic ethics, but the way morality is used in the survey implies that I'd believe in a single correct way of executing morality at the individual, day-to-day level. It's like asking whether I believe in being a carnivore, an herbivore, or a plant. The "other" option is "morality doesn't exist," which is a bit like asking: are you a) Christian, b) Jewish, c) Muslim, or d) religion doesn't exist?

Filled out. For the prob­a­bil­ity ques­tions that I thought were very close to 0 (or 100) I thought about how many times in a row I would have to see a fair coin land heads to have a similar level of cre­dence, and then trans­lated that into per­centages. A fun ex­er­cise.
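For anyone who wants to try the same trick, the conversion is just powers of two. A minimal sketch of my own (not anything from the survey):

```python
# The coin-flip credence trick described above: a credence of "as
# unlikely as n heads in a row from a fair coin" corresponds to
# probability 0.5**n, converted here to a percentage for the survey box.

def heads_to_percent(n):
    """Probability of n consecutive heads from a fair coin, as a percentage."""
    return 100 * 0.5 ** n

for n in (1, 5, 10, 20):
    print(n, heads_to_percent(n))
# Ten heads in a row is already below 0.1%, so this scale outruns
# everyday intuition very quickly.
```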

I took the sur­vey. I’d re­ally have liked an “other/​no af­fili­a­tion” op­tion on the poli­tics ques­tion, though, or a finer-grained scale. I sup­pose I could just have left it blank, but that seems not to trans­mit the right in­for­ma­tion.

Yvain, one very im­por­tant ques­tion that I think you missed: Do you cur­rently have an ac­count on Less­wrong?

I per­son­ally don’t, and glanc­ing through the num­ber of ‘first post’ com­ments here, I be­lieve that the ra­tio of lurk­ers to ac­tive users may be sig­nifi­cant. (This is a throw­away ac­count, and I am mak­ing an ex­cep­tion this once be­cause there would be no other way to get in­for­ma­tion from the lurk­ers.)

I’m not sure what it is about a sur­vey that gets me to stop lurk­ing at a com­mu­nity and ac­tu­ally cre­ate an ac­count, but there you have it. Maybe it’s just the chance to tell my ‘story’ anony­mously.

Like sev­eral other peo­ple, I was a bit both­ered by the P(God) type ques­tions. For some of those, my be­lief de­pends on an ar­gu­ment for the im­pos­si­bil­ity of, say, God, rather than on any par­tic­u­lar ev­i­dence. In that case, am I sup­posed to take into ac­count my un­cer­tainty as to the val­idity of my ar­gu­ment? Or just put 0?

How do you dis­t­in­guish be­tween
1) a uni­verse wherein a gen­uinely om­nipo­tent agent is im­pos­si­ble, and
2) a uni­verse with a gen­uinely om­nipo­tent agent who makes it seem like a gen­uinely om­nipo­tent agent is im­pos­si­ble?

It’s not so much the “gen­uinely om­nipo­tent” bit that I have philo­soph­i­cal prob­lems with as the idea of “on­tolog­i­cally ba­sic men­tal en­tities”. I don’t think this is the place to go into it fully, but suffice it to say that nowa­days I’m not sure if that even makes sense. If I don’t think a situ­a­tion makes sense, how can I as­sign it a prob­a­bil­ity?

Of course, I could weigh that against the prob­a­bil­ity that I’m mis­taken, but I’m not sure whether we’re meant to take that kind of thing into ac­count.

The only way I’ve found is to at­tack the idea of om­nipo­tence on the ba­sis of logic. If the ques­tioner is al­lowed to in­sist I “con­sider the pos­si­bil­ity of a uni­verse where logic isn’t valid,” I can only dis­miss his ques­tion as non­sense.

Yeah, I wasn’t sure how to in­ter­pret the God ques­tion ei­ther. If asked, I ad­mit the pos­si­bil­ity of a “cre­ator be­ing” that is not su­per­nat­u­ral (in Car­rier’s sense). But that op­tion wasn’t in the sur­vey as far as I could tell.

Only an Amer­i­can could have writ­ten some­thing like that… Poli­ti­cal “ide­olo­gies” ap­par­ently do not trans­late be­tween coun­tries in any way. It’s like ask­ing Mus­lims if they feel closer to Catholics or Luther­ans.

The test also has a problem with extremely low-"probability" events like "God existing". There's really no meaningful number between a vague "theoretically possible, just extremely unlikely" (the number of 0s you put there doesn't really mean anything) and a literally impossible 0% here.

Also Mold­bug­gians (there are bound to be a few con­sid­er­ing so many LWers read Un­qual­ified Reser­va­tions) will be sad­dened one can’t put Ja­co­bite /​ neo­came­ri­al­ist /​ restora­tionist /​ re­ac­tionary in there.

Scandinavian countries (plus the UK and the Netherlands, which seem to cluster closer with them than with the rest of the EU) top most indexes of "economic freedom", "ease of doing business", etc. And they still have monarchies over there, with state-church separation happening only recently, or not yet. And Sweden has a large private school system, etc.

Or: they have huge taxes, very comprehensive welfare-state systems, allow gay marriage or some other type, have a lot of out-of-wedlock births, extremely high rates of women's participation in the workforce, etc.

Depend­ing on which fea­tures you fo­cus on, you can make them ap­pear “ex­tremely liberal”, or “ex­tremely con­ser­va­tive” by US met­ric. It will be stupid cat­e­go­riza­tion ei­ther way.

Scan­d­i­na­vian coun­tries top the in­dexes on met­rics other than tax­a­tion, gov­ern­ment spend­ing and “labour free­dom” while the monar­chs (and ar­guably, the churches) are mainly if not solely sym­bolic. If la­bels are ig­nored I think “so­cially per­mis­sive, high taxes, ma­jor re­dis­tri­bu­tion of wealth” de­scribes these coun­tries very well.

Poli­tics is sim­ply in­com­pa­rable be­tween coun­tries. Usu­ally var­i­ous par­ties are clus­tered around some coun­try-spe­cific con­sen­sus, and dis­tance be­tween main­stream par­ties within a coun­try is much smaller than dis­tance be­tween con­sen­sus cen­ters be­tween coun­tries or even across time. Nei­ther po­si­tions nor even is­sues are similar.

You may as well ask in sur­vey if some­one is pro-EU or anti-EU. Most peo­ple in Europe have some opinion about it, and in many coun­tries it’s a ma­jor area of con­tention, but ask­ing non-Euro­peans about it is quite ridicu­lous.

I don't think the foreign policy axis is anywhere near as important as the other two: for example, most people are seldom directly affected by it. And in small, neutral countries such as Switzerland such an axis would be nearly meaningless.

That’s the ex­act same ar­gu­ment as the other peo­ple say­ing the poli­ti­cal ideas of So­cial­ist/​Liberal/​Liber­tar­ian is com­pletely de­pen­dent on coun­try. That doesn’t have any­thing to do with For­eign Policy.

It doesn’t con­tain the for­eign policy axis (and the “fis­cally liberal/​con­ser­va­tive” is named “eco­nomic left/​right”, which is less am­bigu­ous than “liberal/​con­ser­va­tive”).

Some people also include a "politically authoritarian/libertarian" axis, distinct from the "socially authoritarian/libertarian" one (which does make sense; for example, Cuba nowadays is very liberal socially speaking, but not so much politically speaking), but the Compass doesn't; it keeps things simple with just two axes.

FWIW, I’ve just taken the test for the umpteenth time, and I score Eco­nomic Left/​Right: −5.38, So­cial Liber­tar­ian/​Author­i­tar­ian: −5.13. (Through the years I’ve always been in the south­west­ern quad­rant, but when I was younger I used to be a lit­tle bit north­west of where I’m now.)

For the prob­a­bil­ity ques­tions, I think it might have been use­ful for peo­ple to be able to spec­ify con­fi­dence in their es­ti­mate. An es­ti­mate of X% from some­one who is fa­mil­iar with al­most all of the rele­vant ar­gu­ments and ev­i­dence is differ­ent from an es­ti­mate of X% by some­one with only a cur­sory un­der­stand­ing of the is­sue. Then we can tar­get the sub­jects peo­ple are most un­cer­tain about to pro­duce the most in­for­ma­tive dis­cus­sions.

A good bayesian way to make that ques­tion quan­ti­ta­tive would be, “If we ask you again in 10 years, how much do you ex­pect your num­ber to change? Ex­press your an­swer as a fac­tor of the per­centage or the in­verse per­centage, whichever is smaller. So 1 would mean you ex­pect no change, and 3 would mean you ex­pect, with about 50% con­fi­dence, that your es­ti­mate and its in­verse will both be more than a third and less than triple of what they are to­day.”

I know that it should re­ally be a mat­ter of p(1-p) but that’s close enough.
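To make the proposal concrete, here is one way that factor could be computed. This is my own sketch, following the wording above: work with whichever of the percentage and the inverse percentage is smaller, and report the multiplicative factor between estimates.

```python
# A sketch of the proposed "expected change factor". We take whichever
# of p and 1-p is smaller (the percentage or the inverse percentage),
# then report the multiplicative factor between today's estimate and a
# hypothetical future one.

def tail(p):
    """The smaller of p and 1 - p: the 'rare side' of the estimate."""
    return min(p, 1 - p)

def change_factor(p_now, p_future):
    """Multiplicative factor (always >= 1) between the two estimates."""
    a, b = tail(p_now), tail(p_future)
    return max(a / b, b / a)

# Moving from 1% to 3% is a factor-of-3 change, and by the symmetry of
# tail(), so is moving from 99% to 97%.
print(change_factor(0.01, 0.03))
print(change_factor(0.99, 0.97))
```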

If I ex­pect that my es­ti­mate will change in the fu­ture, why not change it now? I grant that it is highly likely that my es­ti­mates will change, but I don’t know whether any par­tic­u­lar es­ti­mate will change up­ward or down­ward, so for now they stay put.

I sup­pose what an­ti­ci­pa­tion of change in a prob­a­bil­ity es­ti­mate prac­ti­cally means is that you ex­pect new pieces of ev­i­dence to come in and that you have a fairly good idea what the mag­ni­tude of ev­i­dence will be, just not the sign.

I filled out the sur­vey, but I left a num­ber of ques­tions blank, on the ba­sis that I don’t feel qual­ified to an­swer them. I would have left the year of sin­gu­lar­ity ques­tion blank too, but it said that do­ing that meant I thought it definitely wouldn’t hap­pen.

I would prob­a­bly be an N, but I’d need a bet­ter defi­ni­tion of “sin­gu­lar­ity”. In fact, I think the ques­tion would be gen­er­ally more in­ter­est­ing if it were split into three: su­per­hu­man AI, AI which self im­proves with moore’s law or faster, and AI dom­i­na­tion of the phys­i­cal world at a level that would make the differ­ence be­tween chim­panzee tech­nol­ogy and hu­man tech­nol­ogy small. All three of these could be ex­pressed as prob­a­bil­ity of it hap­pen­ing be­fore 2100, be­cause such a prob­a­bil­ity should still have enough in­for­ma­tion to let you mostly dis­t­in­guish be­tween a “not for a long time” and a “never”.

Hm… maybe I am a con­se­quen­tial­ist, af­ter all. But I try hard not to think of peo­ple as good or bad. What the Good Sa­mar­i­tan did was a good thing, be­cause it helped the vic­tim. And of course peo­ple with a char­i­ta­ble and benev­olent na­ture will tend to do good things more of­ten, as will those who fol­low good moral edicts.

Haha, I don't know. Given that I was just introduced to it, I don't even really know the arguments for/against. I've so far only come up with arguments in my head, and they point me toward "deontologist."

Good idea, and a good set of ques­tions. How­ever, while I might say I’m fairly knowl­edge­able about a few top­ics any­where else, the feel­ing of go­ing far out of my depth is one I as­so­ci­ate strongly with LW. As an ex­am­ple, I would ex­pect the list of those who could hold a heavy AI dis­cus­sion with LW’s res­i­dent ex­perts to be about 5 peo­ple.

Also, “ex­ists” when refer­ring to the en­tire ob­serv­able uni­verse, makes me a bit tense. In our past light cone? In our fu­ture light cone? In a spacelike in­ter­val? It makes a big differ­ence.

I think the phras­ing there will prob­a­bly cause weird effects. For ex­am­ple, it seems most LWers have only vague ideas of biol­ogy and medicine, and I can talk con­fi­dently with a biol­ogy re­searcher or physi­cian of av­er­age abil­ity, so I felt happy check­ing that box. If ev­ery­one rea­sons like me, we’ll see lots of checks in that box, not be­cause peo­ple here are ex­pert in biol­ogy and medicine, but be­cause we aren’t.

Alright, I fi­nally made an ac­count. Thanks for the push, though this had lit­tle to do with why I’ve joined. I liked the prob­a­bil­ity parts of the sur­vey, though I know I need to im­prove my es­ti­mates. Poli­ti­cal sec­tion might be bet­ter done with a full-fledged Ques­tion sec­tion just de­voted to it. Per­haps a later sur­vey? I can’t wait to see the re­sults.

Took it, though I had a hard time answering what religion my family adheres to; my dad is an agnostic, I think, but I'm not even sure what my mother believes in . . .
No one I know very well practices religion (as opposed to just believing) either, so it has never been a big part of my life. That might be because I'm from Sweden.

I took the sur­vey and could feel my af­fec­tive heuris­tics gen­er­at­ing ran­dom near-the-bal­l­park num­bers.

Given I am a math­e­mat­i­cian and have no idea how to ac­tu­ally com­pute any of those prob­a­bil­ities (or what that would even for­mally mean, say in a prob­a­bil­ity mea­sure space), I let those num­bers stand with­out fur­ther scrutiny.

I took it as well. One com­ment: my mother and father ad­here(d) to differ­ent flavours of Chris­ti­an­ity in differ­ent de­grees. This made it some­what hard to an­swer that ques­tion fully (I went with my father be­cause he cares most, but my mother’s views prob­a­bly had more in­fluence on me.)

The political section is begging for a one-line write-in, seriously. Please consider adding one in addition to the pick-one poll. I'm not having warm fuzzies for any of the groups and had to bite my tongue and pick one I really, really dislike, just because the alternatives are so much worse and one of the alternatives, while probably quite a popular choice, would be misinterpreted if I chose it.

From your per­spec­tive, that makes sense. From my per­spec­tive—I don’t in­tend to ever look at this data. I’m go­ing to im­port it into SPSS, have it crunch num­bers for me, and come out with some re­sult like “Less Wrong users are 65% liber­tar­ian” or like “Men are more likely to be so­cial­ist than women.”

If you put “other”—and this ap­plies to any of the ques­tions, not just this one—you’re pretty much wast­ing your vote un­less some­one else is go­ing to sift through the data and be in­ter­ested that this par­tic­u­lar anony­mous line of the spread­sheet be­lieves in strong en­vi­ron­men­tal pro­tec­tion but an oth­er­wise free mar­ket.

Look­ing at the an­swers, I re­ally shouldn’t have al­lowed write-ins for any ques­tions—I was kind of sur­prised how many peo­ple can’t set­tle on a spe­cific gen­der, even though the aim of the ques­tion was more to figure out how many men ver­sus women are on here than to judge how peo­ple feel about so­ciety (I con­sid­ered say­ing “sex” in­stead, but that has its own pit­falls and wouldn’t have let me get the trans­gen­der info as eas­ily. I’ll do it that way next time.)

I was par­tic­u­larly harsh on the poli­tics ques­tion be­cause I know how strong the temp­ta­tion is. I think next sur­vey I’ll give ev­ery ques­tion an “other” check box, but it will liter­ally just be a check box and there will be no room to write any­thing in.

If un­sure, se­lect “Yes” if you are phys­i­cally male and “No” if you are
phys­i­cally fe­male. If you have had SRS, please re­spond for your sex at
birth. This ques­tion is rele­vant to the ge­net­ics of col­or­blind­ness.

Tech­ni­cally, isn’t it the num­ber of X chro­mo­somes that mat­ters to col­or­blind­ness? It’s just that peo­ple with Y chro­mo­somes al­most always have one X chro­mo­some, and peo­ple with­out them al­most always have two.

You’re cor­rect; we asked for Y chro­mo­somes rather than X chro­mo­somes be­cause it’s way eas­ier to have an ex­tra X and not know it than to have a Y and not know it. So if we ask about Y, we can rough-sort into “prob­a­bly XY” and “prob­a­bly XX” groups and then look at the statis­tics for chro­mo­so­mal de­vi­a­tions within those groups.

Most peo­ple don’t ac­tu­ally know their kary­otype, and are of­ten sur­prised to learn that it’s not always what you as­sume. You can’t nec­es­sar­ily in­fer chro­mo­somes from ex­ter­nal ap­pear­ance and self-iden­ti­fi­ca­tion re­li­ably; you have to look at the ac­tual chro­mo­somes to be sure.

Looking at the Barr bodies is not a karyotype test. A test that can't detect whether someone is something other than XX/XY isn't sufficient to actually tell you your chromosome type.

Yes, in terms of strict prob­a­bil­ity most peo­ple will be one of those. The test of the method is how well it han­dles edge cases (not at all); this is of con­sid­er­ably greater im­por­tance when you’re talk­ing about those edge cases.

Also, reread­ing that ex­pla­na­tion, I’m an­noyed at how I worded it. It’s okay, but my trans*-in­clu­sive vo­cab­u­lary has im­proved since then and I could do bet­ter. Hell, just “if un­sure, se­lect ‘yes’ if you were born with a pe­nis” would have been suffi­cient.

Fair point. I’m not sure ei­ther; I think I’m rely­ing on a given in­di­vi­d­ual who is e.g. in­ter­sex ei­ther a) know­ing that, and be­ing able to make a bet­ter-ed­u­cated guess about their chro­mo­somes than any heuris­tic I offer, or b) not know­ing that, which I’m will­ing to as­sume cor­re­lates well to hav­ing gen­i­tals that ei­ther do look like a pe­nis or don’t.

Per­haps the poli­tics ques­tion would be bet­ter phrased nega­tively:

On a scale of zero to ten, how much do you despise each of the following political ideologies? If you endorse an ideology, put zero. If you very mildly despise it, put one. If your life's focus is to expunge it from the world, put ten. You must give each ideology a unique ranking.

All you have to care about is the low­est num­ber, and any­one who wants to do more with the num­bers is able to. Peo­ple would be less in­clined to com­plain about cul­tural fo­cus or bal­ance is­sues.

I second that idea, but even then the cultural focus/balance issues will remain when a word and a "definition" are given in a way that appears to be a strawman or a very US-centric view of things. Maybe remove the words ("libertarian", "socialist", ...) and just give the one-sentence definition?

What people primarily seem to want is a more diverse list. Increasing the word count per entry makes that less feasible. As one source of complaint is, as you imply, the linking of a term with a description, what if descriptions were eliminated altogether?

I could be­gin a poli­ti­cal sur­vey dis­cus­sion post ask­ing peo­ple to PM me a one to three word de­scrip­tion of a view they en­dorse or al­most en­dorse, as well as an­other view they think im­por­tant. I would up­date the main page to re­flect sub­mis­sions so more of the same wouldn’t be sub­mit­ted. Then the poli­ti­cal ide­ol­ogy list could be trimmed down a bit some­how, and peo­ple could do a de­spise-style sur­vey in which they ex­press their dis­ap­proval of each.

As the pre­vi­ous LW sur­vey had about 150 tak­ers, I would ex­pect about that many peo­ple go­ing through the trou­ble of send­ing me sub­mis­sions, and many would be re­dun­dant, and per­haps by con­sen­sus or fiat a rep­re­sen­ta­tive list of 35 or so could be set for the sur­vey. Would that be a rea­son­able num­ber of one or three word phrases to scan? It would be an or­der of mag­ni­tude more effort to read that many poli­ti­cal sen­tences.

The de­spise sur­vey might re­veal in­ter­est­ing things that the ap­proval one did not—for ex­am­ple, we might find we have many tran­shu­man­ists that dis­like liber­tar­i­anism and monar­chism, and hate ev­ery­thing else. Or meta-con­trar­ian peo­ple who ap­prove of cur­rently pop­u­lar move­ments and no fringe ones. I don’t know.

I fear eliminating the descriptions would lead to even more problems, since words like "libertarian", "socialist" or "communist" don't mean the same thing depending on your cultural background. I would have answered the question differently if the descriptions were not given, and I don't think I'm the only one.

Or maybe, could we just ask for a Political Compass score? It would be a straightforward question and easy to exploit later on, even if a bit of a caricature. And if people don't want to take the full Political Compass test, they can still say roughly where they stand on the two axes.

Hav­ing read it, I re­al­ise this post may seem or be overly crit­i­cal. Oh well.

But what the results will actually show, if 65% of people pick libertarian, is that 65% of people identify with libertarianism more than with the other options. This is obviously possible without being a libertarian; one could even hate libertarianism slightly less than the other options and so identify most with it. As well as people whose political views aren't well delineated by any option, there are a few people who are apolitical and would have to pick at random, and one could be forced to hammer a square peg into a round hole. Multiple choice with no "none of the above" for something like this means hammering square pegs into round holes or abstaining if you don't strongly lean one way or another. If you think you'll put a box for "other" in the next survey, why not put it in this survey? Even an uncounted "other" option lets people who'd rather have their choice not count than be identified with one of the given options avoid adding a tally to one, and it gives you the number of people with that preference, which is interesting in itself.

The rest of this post is ideas for minor mod­ifi­ca­tions to word­ing.

Can’t you just change it to “sex” now?

“With what race or ethnic group do you most closely identify?” Some people might identify most closely with a race other than their own. I don't think the intent is to allow for this, but until I read the post this is a reply to, if I did identify with another race more strongly than my own I'd answer that way were I to fill out the survey. Maybe just ask what option best describes or approximates your race.

Maths might be the field of a non-trivial percentage of Less Wrong readers.

I think martial arts would go along nicely with self-help, pickup artistry and meditation as an option for the communities question. All are relatively common self-improvement things, as is Less Wrong. Also, I think members of competitive gaming communities (card games, board games, video games, anything I've missed) would be overrepresented on Less Wrong.

Expertise question: the bar set for "fairly knowledgeable" here might be a little high. I think even someone with an undergraduate degree in maths or physics might be out of their depth in heavy discussion with an expert. Maybe change "heavy" to "light" or remove the qualifier.

Se­ri­ously, dude, cod­ing. Surely some­one would be will­ing to vol­un­teer to code a cou­ple hun­dred open-ends. It should take like 5 min­utes if you’re will­ing to use broad brush­strokes. And if most of the raw data is made pub­lic, the later sift­ing for in­ter­est­ing tid­bits is crowd­sourced.

Well, sure, you could do that. But if I de­cided to hand-code all of the poli­ti­cal write-ins into stan­dard poli­ti­cal terms like “liberal”, “con­ser­va­tive”, “etc”, then all I’d end up with is a list of peo­ple’s poli­ti­cal prefer­ences in a few bins of stan­dard poli­ti­cal terms.

Which is ex­actly what I have now when I don’t al­low write-ins. This way is eas­ier for me and al­lows peo­ple to choose their bin them­selves rather than have me try to guess whether some com­pli­cated philos­o­phy is more con­ser­va­tive than liber­tar­ian or vice versa.

From your per­spec­tive, that makes sense. From my per­spec­tive—I don’t in­tend to ever look at this data. I’m go­ing to im­port it into SPSS, have it crunch num­bers for me, and come out with some re­sult like “Less Wrong users are 65% liber­tar­ian” or like “Men are more likely to be so­cial­ist than women.”
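The hand-coding discussed above can be sketched in a few lines. The keyword map, function name, and sample write-ins below are invented for illustration, not the survey's actual categories or data:

```python
# A minimal sketch of coding free-text political write-ins into standard
# bins. BINS and the sample responses are hypothetical examples.
BINS = {
    "libertarian": ["libertarian", "minarchist", "anarcho-capitalist"],
    "socialist": ["socialist", "social democrat", "marxist"],
    "liberal": ["liberal", "progressive"],
    "conservative": ["conservative", "traditionalist"],
}

def code_write_in(answer: str) -> str:
    """Assign a free-text answer to the first bin whose keyword it contains."""
    text = answer.lower()
    for bin_name, keywords in BINS.items():
        if any(kw in text for kw in keywords):
            return bin_name
    return "other"  # anything unmatched stays uncoded

responses = ["Geolibertarian", "democratic socialist", "Burkean conservative"]
coded = [code_write_in(r) for r in responses]
print(coded)  # ['libertarian', 'socialist', 'conservative']
```

Of course, this illustrates the objection as much as the fix: whoever writes the keyword map is still deciding which bin a complicated philosophy lands in.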

If you put “other”—and this ap­plies to any of the ques­tions, not just this one—you’re pretty much wast­ing your vote un­less some­one else is go­ing to sift through the data and be in­ter­ested that this par­tic­u­lar anony­mous line of the spread­sheet be­lieves in strong en­vi­ron­men­tal pro­tec­tion but an oth­er­wise free mar­ket.

Virtue Ethics got into the current poll because it was a common enough write-in by posters. I consider the write-in option to be useful in some spots because that way one can figure out if one is missing certain common clusters.

I am quite willing to bet that some political categories that are rare or fringe elsewhere may be prominent on Lesswrong, simply because high-IQ people are more likely to try to consistently conform to a particular ideology than low-IQ people. I mean, Libertarian and Communist are (depending on the country) basically such exotic positions; imagine someone making a poll not expecting to find significant numbers of either on Lesswrong.

How exactly could he figure this out and add those two? Oh sure, on a different forum people might just say "well, I'm X-terian and a lot of other people are" or something to that effect, but that seems a pretty rude thing to do for a LWer, given our politics taboo. I for one don't want to know what any particular poster's ideological leanings are! Information is always good, but our brain is literally built to be hijacked by such information.

I wasn’t and still am not sure what “Virtue Ethics” is sup­posed to mean. My per­sonal ethics are based on the liber­tar­ian “non-ag­gres­sion prin­ci­ple,” in other words, don’t vi­o­late the rights of other per­sons, and be­yond that, do what­ever you want. (Which does not mean I don’t see a point to char­ity—I just see char­ity as one of many things you might do with your money or time be­cause it makes you happy. In my ex­pe­rience, enough peo­ple feel that way that it’s rare for any­one to starve or freeze un­less he be­haves so badly that he doesn’t de­serve to be helped.)

Apolo­gies if this vi­o­lates a poli­tics ban, but I can’t re­ally an­swer an ethics ques­tion with­out go­ing there.

As far as the ob­jec­tive “ex­is­tence” of morals: it’s a mean­ingless idea. Even if there is just one God, his opinion doesn’t au­to­mat­i­cally be­come The Truth any more than yours or mine does.

Ul­ti­mately, morals/​ethics are a mat­ter of taste and noth­ing more. But they’re a unique ex­cep­tion to the old saw “there’s no ac­count­ing for taste” be­cause your moral code de­ter­mines whether you can be trusted (to do any par­tic­u­lar thing some­one else ex­pects of you, a ques­tion that of course de­pends on who and what it is).

My per­sonal ethics are based on the liber­tar­ian “non-ag­gres­sion prin­ci­ple,” in other words, don’t vi­o­late the rights of other per­sons, and be­yond that, do what­ever you want.

This would be de­on­tolog­i­cal: you are eth­i­cal if you are fol­low­ing the rules.

Per my un­der­stand­ing of it, virtue ethics looks to the traits of the in­di­vi­d­ual moral agents. It is good to be a com­pas­sion­ate per­son. A com­pas­sion­ate per­son is more likely to give to char­ity, and so giv­ing to char­ity may be in­dica­tive of virtue, but a per­son is eth­i­cal for be­ing com­pas­sion­ate, not for the act it­self.

I just see char­ity as one of many things you might do with your money or time be­cause it makes you happy. In my ex­pe­rience, enough peo­ple feel that way that it’s rare for any­one to starve or freeze un­less he be­haves so badly that he doesn’t de­serve to be helped.

My per­sonal ethics are based on the liber­tar­ian “non-ag­gres­sion prin­ci­ple,” in other words, don’t vi­o­late the rights of other per­sons,

You’re de­scribing a de­on­tolog­i­cal branch of ethics, I think.

As for virtue ethics, I believe virtue ethicists evaluate the morality of a deed based on whether it ennobles or debases the doer. In short, "charity is good" because it instills in you habits of charity that make you a better person. But perhaps a virtue ethicist would be better fit to explain it (and my apologies to them if I got it wrong).

You’ve taken a suffi­ciently co­her­ent poli­ti­cal philos­o­phy and pressed it into ser­vice as a moral philos­o­phy, where it doesn’t fit. The prin­ci­ple “do not harm” doesn’t im­ply that you should (may?) give to char­ity be­cause it makes you feel good. It only im­plies the con­verse, that you should give to char­ity if it makes you feel good.

But [Edit: one] pur­pose of a moral the­ory is to tell you when (if ever) to give to char­ity (and what char­ity to give to, etc.)

Okay, first things first: my ini­tial re­ac­tion to a cer­tain line in your com­ment was a re­flex­ive down­vote, but af­ter a minute I re­con­sid­ered; ap­ply­ing the prin­ci­ple of char­ity, it’s more likely that I’ve mis­in­ter­preted you than that you ac­tu­ally meant what I found ridicu­lous. So, to clar­ify:

In my ex­pe­rience, enough peo­ple feel that way that it’s rare for any­one to starve or freeze un­less he be­haves so badly that he doesn’t de­serve to be helped.

Surely, surely you are not blam­ing the vic­tims of star­va­tion?

Also, sec­ondly:

I wasn’t and still am not sure what “Virtue Ethics” is sup­posed to mean.

WP has an okay sum­mary, but the short ver­sion is: an act is moral or not based on the char­ac­ter and in­ten­tions of the ac­tor. It sounds like your ethics are rather more de­on­tolog­i­cal (i.e. rule-based).

If you put “other”—and this ap­plies to any of the ques­tions, not just this one—you’re pretty much wast­ing your vote

I dis­agree; it might be im­por­tant to iden­tify one­self as some­thing which is not one of the pre­sented op­tions, even if no one cares what other thing you are. For ex­am­ple …

I was kind of sur­prised how many peo­ple can’t set­tle on a spe­cific gen­der, even though the aim of the ques­tion was more to figure out how many men ver­sus women are on here

… I’m gen­derqueer, and when I take de­mo­graphic sur­veys it’s im­por­tant to me that I’m not counted in ei­ther the “men” or the “women” group. Firstly, it would be ly­ing, and sec­ondly, it would be ly­ing in a way which per­pet­u­ates the in­visi­bil­ity of my ac­tual iden­tity. That may not be a big deal to the sur­vey writer, but it’s always a big deal to me.

Ul­ti­mately, the ques­tion be­comes how you will in­ter­pret the differ­ence be­tween no-an­swer and check­ing a par­tic­u­lar box. If no an­swer by con­ven­tion means “I don’t know the an­swer to this ques­tion,” then it makes sense to have a “I know the an­swer, but it’s none of the choices you give” box (aka “other”). It may also make sense to have a “I know the an­swer, but it’s more than one of the choices you give” box. Or a “I know the an­swer but don’t want to tell you” box. Etc.

Or, not. Much as peo­ple get an­noyed by be­ing asked to cat­e­go­rize them­selves, that is ba­si­cally the point of this sort of sur­vey, and no­body is obli­gated to take it. There’s no par­tic­u­lar rea­son you should change your strat­egy to alle­vi­ate our an­noy­ance.

There’s also a val­i­da­tion is­sue. A blank could mean “I ac­ci­den­tally scrol­led past this ques­tion with­out notic­ing it”. The stan­dard for on­line sur­veys is to (where ap­pro­pri­ate) in­clude choices for “Other”, “None”, and “Pre­fer not to an­swer”, and then force a re­sponse for ev­ery ques­tion so that you know noth­ing was ac­ci­den­tally skipped.

That said, on­line sur­veys of­ten fail at this, for in­stance hav­ing “gen­der” ques­tions with just the 2 op­tions (they should at least have an “other”) or only ac­cept­ing as “valid” an­swers that do not fit the en­tire pop­u­la­tion (For ex­am­ple, a sur­vey for doc­tors with no ex­plicit age cut­off limited ages to <99; at the time, there was one prac­tic­ing doc­tor older than that—he would just have been given an er­ror mes­sage that his age was “in­valid”.)
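The forced-response convention described above can be sketched as follows. The question options, escape options, and function name are hypothetical illustrations:

```python
# A sketch of the validation convention: every question offers explicit
# escape options, and a blank is rejected as a probable accidental skip
# rather than silently recorded.
ESCAPE_OPTIONS = ["Other", "None", "Prefer not to answer"]

def validate(question_options, response):
    """Return the response if valid; raise so the respondent is re-asked."""
    if response is None or response == "":
        raise ValueError("Blank response: question may have been skipped accidentally.")
    if response not in question_options + ESCAPE_OPTIONS:
        raise ValueError(f"{response!r} is not one of the offered choices.")
    return response

options = ["Male", "Female"]
print(validate(options, "Other"))  # accepted via an escape option
# validate(options, "") would raise, forcing a re-ask instead of a silent blank
```

The point of the escape options is that forcing a response is only fair if every respondent, including the doctor over 99, has some valid answer available.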

Would it be pos­si­ble in the fu­ture, rather than hav­ing a write-in or group iden­ti­fi­ca­tion, to do some­thing like poli­ti­cal com­pass co­or­di­nates? This would have the benefit of al­low­ing peo­ple to ex­press views that don’t fit into camps with­out hav­ing the op­por­tu­nity to write lots of words no one will read.

Right now for the poli­tics ques­tion, you have three(!) differ­ent strains of ne­oliber­al­ism, so­cial democ­racy, and Stal­inism. That’s hardly rep­re­sen­ta­tive of the global poli­ti­cal spec­trum, and I’m hon­estly sur­prised that any­one de­sign­ing that ques­tion on a sur­vey would make that mis­take.

Right now for the poli­tics ques­tion, you have three(!) differ­ent strains of ne­oliber­al­ism, so­cial democ­racy, and Stal­inism.

Alter­na­tive com­plaints:

Right now for the poli­tics ques­tion, you have three(!) differ­ent strains of leftism, liber­tar­i­anism, and con­ser­vatism. That’s hardly rep­re­sen­ta­tive of the his­tor­i­cal poli­ti­cal spec­trum, and I’m hon­estly sur­prised that any­one de­sign­ing that ques­tion on a sur­vey would make that mis­take.

… or:

Right now for the poli­tics ques­tion, you have four(!) differ­ent strains of statism, and liber­tar­i­anism. That’s hardly rep­re­sen­ta­tive of the di­ver­sity of ide­olo­gies, and I’m hon­estly sur­prised that any­one de­sign­ing that ques­tion on a sur­vey would make that mis­take.

Yes, an­ar­chists, monar­chists, theocrats, etc. might ob­ject that their view isn’t rep­re­sented, but I think that limit­ing the pos­si­bil­ities was still the right choice (see also the ob­jec­tions to the gen­der ques­tion). Keep­ing the fo­cus on LessWrong away from poli­tics seems best.

The cur­rent limi­ta­tion of pos­si­bil­ities doesn’t keep the fo­cus on LessWrong away from poli­tics. It fo­cuses on cer­tain types of poli­tics.

Fur­ther, if you’re call­ing La­bor or the Democrats leftist, or the Liber­tar­ian party anti-state, you’re just wrong by al­most any met­ric worth car­ing about.

It wouldn’t have been hard to have one op­tion for each of cap­i­tal­ist/​pro-state, leftist/​pro-state, cap­i­tal­ist/​anti-state, and leftist/​anti-state. That would have cap­tured all mod­ern poli­ti­cal al­ign­ments, and any­thing more spe­cific could be an­other op­tion.

As it stands, that ques­tion is to­tally use­less to me, and prob­a­bly to most other leftists. So any con­clu­sion like “women are more likely to be so­cial­ists” will be equally mean­ingless. Most so­cial­ists don’t even con­sider Euro­pean so­cial democ­ra­cies to be so­cial­ist.

I took the sur­vey and re­ally en­joyed it. Thanks! It was mostly clear but I’m not gonna lie—had to look up the moral­ity defi­ni­tions (ex­cept con­se­quen­tial­ism). Per­haps a very brief defi­ni­tion would help.

One prob­lem with the poli­ti­cal ques­tion: So­cial­ism is not what they have in Scan­d­i­navia. That would be so­cial democ­racy (tech­ni­cally a form of gov­ern­ment that’s sup­posed to evolve to­wards full so­cial­ism, but they don’t seem to have done that). It’s un­clear what op­tion one is sup­posed to choose to mean “What they have in Scan­d­i­navia” rather than ac­tual so­cial­ism.

Political words like "socialism" mean very different things in different places, so a description like "what they have in Scandinavia" is supposed to pin down the extension enough for you to work out the intension.

To me so­cial­ism is sup­posed to mean col­lec­tive own­er­ship of means of pro­duc­tion (through co­op­er­a­tives, gov­ern­ment or any other mean), not “just” wealth dis­tri­bu­tion within a globally cap­i­tal­ist econ­omy.

But then, the "parti socialiste" in France is social-democrat, not wanting socialism...

Even when there is no will to make things actually fuzzy, words are sometimes treacherous. In a field like politics, they are abused in various ways... and when you add cultural differences and a lossy process like translation on top of all that... welcome to the joy of not understanding each other at all.

I guess that's why he put in the details about what he meant by each word. We may not agree on the labels, but we understand from the description which category we fit best.

I took the sur­vey, but un­for­tu­nately, when I saw “If you don’t know enough about the propo­si­tion to have an opinion, please leave the box blank”, I left all of the prob­a­bil­ity boxes blank af­ter­wards be­cause I just didn’t feel like I could give an an­swer I would be happy with, even for some of the ques­tions that could be de­scribed as clear-cut. Maybe next sur­vey I’ll be able to provide more use­ful de­tails.

I took the sur­vey. I would trust my prob­a­bil­ities for aliens, es­pers, and time trav­el­ers as far as I can throw them. I don’t re­ally think any num­ber I could give would be rea­son­able ex­cept in the weak sense of not com­mit­ting the con­junc­tion fal­lacy.

I second the anchoring effect in the Singularity question. Based on comments I had previously written, I would have expected a far more distant year than the one I gave in the survey. Oops.

Also, I missed the Prin­cipia ques­tion by ten years, and gave my­self 80% con­fi­dence. I don’t know if that was good or bad. How would I go about es­ti­mat­ing what my con­fi­dence should have been?

I was dis­ap­pointed that math­e­mat­ics fell un­der the “hard sci­ences”, but I sup­pose we can’t all have our own cat­e­gory.

Re the politics question: I'm not a communist, but I don't think any sane modern communists would use the Soviet Union as an example of communist government. They officially claimed the government was a transitional stage towards a self-governing collective utopia.

In Soviet par­lance, the Soviet Union was a so­cial­ist so­ciety but could be fairly de­scribed as hav­ing a com­mu­nist gov­ern­ment. Of course if you’re an anti-re­vi­sion­ist or Trot­sky­ist or Judean Pop­u­lar Front or the like things get more com­pli­cated, but my guess is that any­body who self-de­scribes as “com­mu­nist” will have picked that op­tion re­gard­less of the de­scrip­tion, which is, to be sure, weird on a cou­ple of lev­els. Like most fringe-but-widely-known groups they’re used to be­ing de­scribed in ways that are slightly off.

Edited to add: I posted my own ideas con­cern­ing SI and so­cial busi­ness in the com­ments. What are yours? Also, ad­dress­ing some valid points made in the com­ments, what are some other in­no­va­tive ways to fund SI?

True. It might be in­ter­est­ing to see if any hid­den com­mon­al­ities among Less Wron­gians ex­ist, how­ever, if the “Other” op­tion comes along with a “fill-in-the-blank” field. It might also be a good idea to in­clude this “Other” op­tion in ad­di­tion to the other op­tions to avoid ev­ery­one check­ing “Other”.

I didn’t like the ethics ques­tion, be­cause it could be in­ter­preted as ask­ing about one’s the­o­ret­i­cal po­si­tion on metaethics, or about one’s ac­tual val­ues, and the two can di­verge. Speci­fi­cally: I bet there are quite a lot of peo­ple on LW for whom some­thing like the fol­low­ing is true: “I don’t be­lieve that moral judge­ments have ac­tual truth val­ues sep­a­rate from the val­ues of the peo­ple or in­sti­tu­tions that make them. But I do have val­ues, and I do make moral judge­ments, and the way I do so is: [...]”.

This ques­tion also heav­ily de­pends on the ir­rele­vant fact of whether FAI should keep var­i­ants of origi­nal in­di­vi­d­u­als, or there is some­thing bet­ter that it should there­fore do in­stead. In 1000 years, it’s FAI or bust, so this di­rectly con­trols the an­swer. But pre­sum­ably mo­ti­va­tion for this ques­tion is “Will the fu­ture be good in this here sense?”, while the es­ti­mate is lower if the fu­ture can be even bet­ter...

I’d be in­ter­ested to know what pro­por­tion gave an es­ti­mate for 1000 year lifes­pans which is at least as high as their es­ti­mate for re­vival from cry­on­ics.

I sup­pose it’s pos­si­ble that sus­pended an­i­ma­tion is in­com­pat­i­ble with great longevity for those al­ive now, but it’s hard to think of a mechanism. Per­haps ge­netic mod­ifi­ca­tion is re­quired for longevity, and the tech for re­vival can’t simu­late that.

Hy­po­thet­i­cal: if that were the case, would it be bet­ter not to thaw out cry­on­ics pa­tients as soon as it be­comes pos­si­ble to, in the hopes that the longevity prob­lem would be solved in the fu­ture?

I sup­pose it de­pends on how likely re­ju­ve­na­tion is to be solved. If it’s look­ing un­solv­able, then re­viv­ing the per­son asap makes sense—there’s prob­a­bly less cul­ture shock in deal­ing with a less dis­tant fu­ture.

Another proof that sur­vey de­sign is hard: should I an­swer “yay male/​male sex, I strongly sup­port same-sex ” or “boo male/​male sex, I am not in­ter­ested?” Or, tak­ing a page from Ali­corn’s book, what about those who say “yay male/​male sex, I’d like to be in­ter­ested in men?” (I’d ex­pect this to be a statis­ti­cally de­tectable por­tion of test-tak­ers.)

Also, mak­ing peo­ple write es­says just to throw them away is not a ter­ribly pro­duc­tive use of any­one’s time.

In the mean­time, I sup­pose in­di­vi­d­u­als can ap­prox­i­mate the same be­hav­ior by writ­ing such things in a file on their hard drive. It won’t af­fect pro­cess­ing of the sur­vey, of course, but then it wouldn’t re­ally do so any­way.

Longer-term, pre­sum­ably the goals we want to achieve with a ques­tion should drive the op­tions we provide for an­swers. If we want to cor­re­late de­mo­graphic cat­e­gory with other an­swers, then we re­ally don’t care about de­mo­graphic cat­e­gories that cover fewer than 5% or so of the pop­u­la­tion, since such cor­re­la­tions would be even less use­ful than baseline, but we do care about stan­dard­iz­ing an­swers. If we want to know how LessWrong read­ers iden­tify them­selves be­cause we’re cu­ri­ous, we don’t re­ally care about stan­dard­iz­ing an­swers, but we do want to let re­spon­dents use their own terms to de­scribe them­selves. Etc.

I took your sur­vey. There may be small er­rors in a cou­ple of my an­swers. I can hardly wait to see your ex­pla­na­tion of what you are do­ing with those “cal­ibra­tion ques­tions” like “what is your es­ti­mate of the prob­a­bil­ity that your an­swer to New­ton’s Prin­cipia pub­li­ca­tion date is within 15 years of the cor­rect an­swer”?

Also if there is some sort of sam­pling the­ory sur­vey­ing prac­tice FAQ that ex­plains the use of such ques­tions I would be in­ter­ested in read­ing it.
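I don't know of a single canonical FAQ, but one standard tool for scoring calibration questions like the Principia one is the Brier score: the mean squared gap between your stated confidence and the 0/1 outcome (1 if your answer was within the tolerance, 0 if not). A minimal sketch, with invented sample answers:

```python
# Brier score: mean squared error between stated probabilities and
# binary outcomes. Lower is better; 0.0 is perfect calibration-plus-accuracy.
def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs, outcome in {0, 1}."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# e.g. 80% confident and right, 80% confident and wrong, 50% and right
# (these sample answers are made up for illustration):
sample = [(0.8, 1), (0.8, 0), (0.5, 1)]
print(round(brier_score(sample), 3))  # 0.31
```

On this scoring, being 80% confident and wrong (0.64) costs much more than being 80% confident and right saves (0.04), which is what rewards honest rather than overconfident probabilities.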

I didn’t like it be­cause some of the ques­tions offered too nar­row a range of an­swers for my taste. Ex­am­ple: I con­sider the “many wor­lds” hy­poth­e­sis to be ob­jec­tively mean­ingless (be­cause there’s no pos­si­ble ex­per­i­ment that can test it). The same goes for “this uni­verse is a simu­la­tion.”

As for the “sin­gu­lar­ity”, I see it as nearly mean­ingless too. Every defi­ni­tion of it I’ve seen amounts to a hori­zon, be­yond which the fu­ture (or some as­pects of it) will be uni­mag­in­able—but from how far past? Like a phys­i­cal hori­zon, if such a “limit of vi­sion” ex­ists it must re­cede as you ap­proach it. Even a cliff can be looked over.

It’s a log­i­cal con­se­quence of the premises. The in­stant there’s a split, all branches ex­cept the one you’re in be­come to­tally and per­ma­nently un­reach­able by any means what­ever. If they did not, the con­ser­va­tion laws would be vi­o­lated.

If all other in­ter­pre­ta­tions made testable pre­dic­tions, it wouldn’t be enough un­less you could some­how elimi­nate any pos­si­bil­ity that didn’t make the list be­cause no­body’s thought of it yet. It’s like the fal­lacy in Pas­cal’s Wager: all pos­si­ble re­li­gions be­long in the hat.

So if for thou­sands of years sci­ence can’t think of any­thing bet­ter than hid­den vari­ables of the gaps, col­lapse at a level we can’t de­tect be­cause of its scale, and MWI, MWI is “ob­jec­tively mean­ingless”? If some­how the room for hid­den vari­ables is elimi­nated, and the col­lapse is falsified, it’s still “ob­jec­tively mean­ingless”?

So let me state my un­der­stand­ing with the in­flec­tion of a ques­tion so you know it re­quests a re­sponse… If (for thou­sands of years, sci­ence can’t think of any­thing bet­ter than [hid­den vari­ables of the gaps && col­lapse at a level we can’t de­tect be­cause of its scale && MWI]) then (MWI is “ob­jec­tively mean­ingless”).

I don’t know what you mean by “sci­ence can’t think of any­thing bet­ter”.

I’m sim­ply us­ing the stan­dard that a state­ment is ob­jec­tively mean­ingful if it states some alleged ob­jec­tive fact.

I re­ject the no­tion of hid­den vari­ables (ex­cept pos­si­bly the core of one­self, the ex­is­tence of the ego) as un-Bayesian. With that one po­ten­tial ex­cep­tion, all ob­jec­tive facts are testable, at least in prin­ci­ple (though some may be im­prac­ti­cal to test).

I fail to see how one can be ra­tio­nal and not be­lieve that. I’m not say­ing this to in­sult, but to get an ex­pla­na­tion of what you think I’ve over­looked.

The com­par­a­tive karma of my com­ments to the sur­round­ing com­ments also seems to mat­ter to me. Speci­fi­cally if am ar­gu­ing with some­one who is say­ing some­thing trans­par­ently log­i­cally ab­surd and their com­ments are higher than mine it in­vokes both dis­gust and con­tempt.

In fact, since the de­fault ten­dency is for de­scen­dant com­ments to score lower than their par­ents, I find it par­tic­u­larly in­sult­ing when­ever a di­rect re­ply to one of my com­ments has a higher score (if there is any challenge or dis­agree­ment in­volved).

BTW, I won­der if the “karma for the last 30 days” me­ter counts the karma for stuff which I wrote in the last 30 days, or for what­ever was up/​down­voted in the same pe­riod, no mat­ter how long ago I wrote it.

Took it. It might be worth differ­en­ti­at­ing be­tween peo­ple who iden­tify with a par­tic­u­lar poli­ti­cal group and peo­ple who just hap­pen to skew a lit­tle more in one di­rec­tion than an­other.

Some of my prob­a­bil­ities might be a bit off, too, as I’m not en­tirely sure about fac­tor­ing x-risks into the lifes­pan ques­tions. A bet­ter way of spec­i­fy­ing var­i­ous very small prob­a­bil­ities would also be ap­pre­ci­ated.

I had fun do­ing the back­ground re­search to be able to give a num­ber to the P(Aliens) ques­tions. :) The topic has, of course, come up many times, but never be­fore for me in as­so­ci­a­tion with a com­mu­nity where the so­cial norms fa­vored a care­ful, quan­ti­ta­tive an­swer.

When answering the Newton question, I was surprised at the shape of my probability distribution for the answer. It definitely wasn't a Gaussian, a uniform distribution, or any other form that I've worked with. This was simply due to the knowledge I started with, which was vague propositions rather than measurements (i.e. I knew the right century and had a good idea when Newton was born, but didn't know when he died). I'm quite curious what the distribution of responses will be for the year, since a historical date is the sort of thing we'd expect humans to make errors on, but not Gaussian errors.

I had fun do­ing the back­ground re­search to be able to give a num­ber to the P(Aliens) ques­tions.

I en­joyed this too. Tried to cal­ibrate Aliens 1 with Aliens 2, and found that what seemed like a mod­est es­ti­mate for Aliens 2 (still a shot in the dark due to too many Drake un­knowns, but what the hell) cre­ated an enor­mous prob­a­bil­ity es­ti­mate for Aliens 1. More con­vinced than ever that we are not alone.
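That calibration can be sketched numerically: even a tiny per-star probability of a civilization, multiplied over enough stars, pushes the probability that at least one exists toward certainty. The star count and per-star probability below are invented placeholders, not anyone's actual survey answers:

```python
import math

# Drake-style sketch: a "modest" per-star answer (Aliens 2) implies an
# enormous answer for "any aliens at all" (Aliens 1). Both numbers here
# are assumed order-of-magnitude placeholders.
stars = 1e22            # rough count of stars in the observable universe
p_per_star = 1e-15      # assumed "modest" per-star probability of a civilization

expected_civilizations = stars * p_per_star
print(f"{expected_civilizations:.3g}")  # ~1e7 expected civilizations

# Poisson approximation: P(at least one) = 1 - exp(-expected count)
p_at_least_one = 1 - math.exp(-expected_civilizations)
print(p_at_least_one)  # effectively 1: a near-certain "we are not alone"
```

The asymmetry is the whole point: the per-star estimate has to be implausibly tiny before the "at least one, anywhere" probability stops being near 1.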

I think this con­fuses model with in­ter­pre­ta­tion. It’s clear that the model makes good pre­dic­tions, and is in some sense cor­rect. In­ter­pre­ta­tions are a ques­tion of what else is be­hind the model—if it is mak­ing sub­stan­tially differ­ent pre­dic­tions, it is an in­ter­pre­ta­tion of a differ­ent model.

Why do you assign identical priors to all empirically equivalent interpretations?

Why shouldn’t I? I only prefer the simpler of two stories that make everywhere and always identical predictions, because it’s more pleasant—but I can’t find it any more likely. I thought the notion of a universal prior was to normalize to the shortest equivalent description. If collapse vs. many worlds are equivalent in their predictions, then my universal prior gives the same answer for them both.

You really think there is basically no chance of a collapse or hidden variable interpretation being true? Why?

You slightly misunderstood me. As far as I understand them, they’re all equivalent with respect to any measurements I can perform. So I give them all near 100% chance of being “more or less correct”.

I thought the notion of a universal prior was to normalize to the shortest equivalent description. If collapse vs. many worlds are equivalent in their predictions, then my universal prior gives the same answer for them both.

The “equivalent” in your characterization of the universal prior does not mean “empirically equivalent”. If you read it that way, then you’re not doing Solomonoff induction.

You slightly misunderstood me. As far as I understand them, they’re all equivalent with respect to any measurements I can perform.

This is false. There are possible experiments that distinguish many worlds from its collapse and hidden variable competitors.

I wasn’t claiming to do Solomonoff induction, or claiming to use a universal prior. I think you know the definitions of those better than I do, but I’m not sure you understood that I stipulated that the competing theories be empirically equivalent everywhere and always—not just in my experience so far. I don’t know of any stronger notion of equivalence, so if you’d like to specify what equivalence you think I should be using, I’m all ears (I do know that there are syntactically verifiable equivalences, but I don’t consider those to be any stronger).

There are possible experiments that distinguish many worlds from its collapse and hidden variable competitors.

Maybe. Although I don’t completely understand QM, I’ve heard that MWI is experimentally indistinguishable from at least one other interpretation. I’d appreciate a reference to any experiment that should separate MWI from its competitors.

Consider a conspiratorial interpretation of quantum mechanics according to which the universe is genuinely local and deterministic, but the initial conditions of the universe are jerry-rigged so that all measurements made by sentient creatures fit quantum statistics (even though events in general do not). This theory is empirically equivalent to many worlds. It seems clear that there are several senses in which it is not equivalent to many worlds. And I think there is good reason to assign it substantially lower prior probability than many worlds, since one would need to specify the entire initial condition of the universe in order to predict correlations that many worlds predicts based simply on Schrödinger’s equation.

That’s a useful demonstration of the intuition behind “simpler is more plausible”. Still, if it were possible to know that your jury-rigged-setup story were everywhere and always (not just up-til-now) empirically equivalent to MWI or whatever, then I’d really bite the bullet and call it absolutely equivalent.

David Deutsch has a paper called “Three experimental implications of the Everett interpretation”. I can’t find it online, unfortunately. The experiments are infeasible with current technology, but the fact remains that many worlds makes different predictions than orthodox QM.

The basic idea is easy to grasp. Copenhagen says there are certain sorts of systems (observers, or measuring devices) that can collapse superpositions but do not themselves enter into superposed states. Many worlds says that these systems do enter into superpositions. There are possible measurements (very difficult to conduct, admittedly, given the size of these systems) that can tell us whether or not such a system is in a superposed state.

“Three experimental implications of the Everett interpretation”. The experiments are infeasible with current technology, but the fact remains that many worlds makes different predictions than orthodox QM.

For the Existential Risk question, I would have liked to see an option for societal collapse. It wouldn’t have been my number one option, but I think the prospect of multiple stressors in conjunction, such as international economic and food crises, leading to a breakdown of modern civilization is more likely than a number of other options already on the list.

I think the prospect of multiple stressors in conjunction, such as international economic and food crises, leading to a breakdown of modern civilization

Okay, but… including the deaths of 90% of humanity? That’s the sticking point, for me—I could see maybe 50% of humanity, but 90 seems like too much. (90 seems like too much for nuclear war, too, for that matter.)

If society collapses, we would lose the ability to support most of humanity. I wouldn’t expect it to result in the loss of 90+% of the population within the space of a decade, but I could definitely see it dropping by that much.

I don’t think it’s all that likely, but I would definitely rate it above a natural pandemic wiping out 90% or more of the population.

I don’t really understand why divorced would be separate from single and looking (or single and not looking, if the marriage was especially traumatizing). Also, one could be married and looking if one is polyamorous.

I took the survey, but didn’t read anything after “Click Here to take the survey” in this post until afterwards.

So my apologies for being extremely program-hostile in my answers (explicitly saying “epsilon” instead of 0, for instance, and giving a range for IQ since I had multiple tests). Perhaps I should retake it and ask you to throw out the original.

I did have one other large problem. I wasn’t really clear on the religion question. When you say “more or less right”, are you talking about cosmology, moral philosophy, historical accuracy? Do you consider the ancient texts, the historical traditions, or what the most rational (or most extreme) modern adherents tend to believe and practice? If ancient texts and historical traditions, judging relative to their context or relative to what is known now? My judgement of the probability would vary anywhere from epsilon to 100-epsilon depending on the standard chosen, so it was very hard to pick a number. I ended up going with what I considered Less Wrong convention and chose to judge religions under the harshest reasonable terms, which resulted in a low number but not epsilon (I considered judging ancient texts, or the most reactionary believers by modern standards, to be unreasonably strict).

Took the survey. Interested in the results. Interestingly enough, I have had an account for a month or two now, but have not posted anything until now. Thanks for putting this together, Yvain.

I just finished the survey. I had given myself a 15% probability of being correct on the Newton question, and was off by significantly over 15 years. However, I should have calibrated that as 30%, as I knew the century but had no idea when in the century he published the book.

However, I should have calibrated that as 30%, as I knew the century but had no idea when in the century he published the book.

Yes! I made the same mistake.

If you know the century, there are only about (10/3) mutually exclusive 30-year periods. Thus, the lowest your maximum probability out of all 30-year periods should be about 30%, and the one that you actually guessed should be at least a little higher than that. (Of course, if your guess is within 15 years of the century boundaries, some of that probability mass is going to get splinched.)
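The arithmetic is easy to sketch directly (a minimal illustration assuming a uniform prior over a known 100-year century; the function name is my own):

```python
# Mass captured by a +/-15-year window under a uniform prior over a
# known century. A mid-century guess captures 30%; a guess near a
# century boundary loses ("splinches") part of its window.
def window_mass(guess_year, century_start=1600, half_width=15):
    lo = max(guess_year - half_width, century_start)
    hi = min(guess_year + half_width, century_start + 100)
    return max(hi - lo, 0) / 100

print(window_mass(1650))  # mid-century guess: 0.3
print(window_mass(1605))  # near the boundary: 0.2
```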

Nanoweapons that aren’t used to kill everyone aren’t an existential threat; they’re just a threat to the enemies of the people with the nanoweapons. I guess you could argue that nano-proliferation could set up a scenario like we have now with the nuclear standoff, but we already have a situation like that, with the nuclear standoff. Not easy to see why that should be more worrisome.

Increasing the number of possible weapons that can contribute to total war increases the chances that such a war will occur, especially if the number of actors who have them goes up. Worse, if nanoweapons turn out to be easier to make than nukes once one has the basic knowledge, then a Saddam Hussein or a Ghaddafi type could easily ruin everyone’s day.

Done. Seemed like a pretty good survey overall. Like others, I was confused by some questions though. Didn’t know how to answer family religion, especially since I wasn’t sure how far back I was supposed to look. Also, how exactly would it be determined when the singularity occurs? The moment human-level AI is reached? Seems to me that it would be more of a gradual (though still relatively sudden, all things considered) process.

The probability questions were interesting. I guess the questions about Newton and IQ relative to the average were there to account for Less Wrong over/underconfidence? Either way, since I didn’t have an IQ score handy there was only one question, which I could have gotten right by accident. Would have liked to see a few more along those lines. (Heck, I would really like to see a “judge your own rationality” test on Less Wrong, period. Anyone done this yet?)

I’m fairly sure there is no cryonics available in my area—perhaps this could be added as an option in future surveys?

I felt I didn’t have a strong basis to answer many of the P(x) questions, but I answered some as best I could, and left others blank. I also wasn’t sure whether being a regular poster on an atheism forum would count as being an active member of a community—I selected “no”.

Surely “median date” just means the date at which it’s equally likely to occur before as after. That is, if the singularity has a 30% chance of ever happening, it’s the date before which it’s 15% likely to happen.

That assumes you interpret not happening as being a separate third category, but for these purposes it seems more reasonable to consider it as always happening after (i.e. happening at time infinity), since we want lower probability of it happening soon to cause the median date to increase.

I disliked the moral philosophy question. I felt comfortable putting down “consequentialist,” but I can see how someone might feel none of the answers suited them well. I would have made the fourth option simply “other,” and maybe added a moral realism vs. anti-realism question.

See the PhilPapers survey. On the normative ethics question, “other” beat out the three “standard” moral philosophies, and there’s no indication that everyone in that category is a moral anti-realist.

Also, for the Newton question:

My answer: friragrra bu svir

Correct answer: fvkgrra rvtugl frira

Now I feel dumb for putting such a high confidence in my answer. Should I feel dumb?

I guess if I had thought about it more, I would have realized that my confidence that my 30-year range was not too low exceeded my confidence that it was not too high, and adjusted my answer downwards a few years accordingly.

In the singularity year question, I interpreted that to mean “50% that a singularity occurs before YYYY, 50% that either it occurs later or it never occurs at all; leave blank if you think it’s less than 50% that it ever occurs”, even though, taken literally, the first part of the question suggests “50% that the singularity occurs before YYYY, given that it ever occurs”. Given that my probability that no singularity will ever occur is non-negligible, these interpretations would result in very different answers.

Yes. My estimate was based on “Keep adding years until the cumulative probability is 50%”, which did eventually terminate, but at a much higher year than if I were to assume it is to occur.

Given the presence of just one person who believes the probability that a singularity will ever occur is about 50.01% and who applies this heuristic, I hope the results of the survey aren’t limited to giving us the mean!

If you look at the results of the last survey, that’s exactly what happened, and the mean was far higher than the median (which was reported along with the standard deviation). I agree, it would have been a big improvement to specify which sense was meant.

Also, answering the year Y such that P(singularity before Y | singularity ever occurs) = 50% would be the best way to get a distribution of answers on when it is expected. So that’s what I did. If you interpret the question the other way, then anyone with a 30-49.9999% chance of no singularity has to put a date that is quite far from where most of their probability mass for when it occurs lies.

Suppose I believe that there is a .03% probability of a singularity for each of the next 1000 years, and then decaying by 1/2 every thousand years after that. That puts my total singularity probability in the 52% range, with about half of my probability mass concentrated in the next 1000 years. But to answer this question literally, the date I’d have to give would be around 7000 AD, even though I would think it was about as likely to happen by 3011 AD as after 3011 AD.
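Those numbers check out. Here is a sketch reproducing them (assuming independence year to year and per-millennium mass halving after the first thousand years; `year_at_cumulative` is a name I made up):

```python
# 0.03%/year for 1000 years, then probability mass per millennium halves.
p_first = 1 - (1 - 0.0003) ** 1000   # mass in the first millennium, ~0.26
total = 2 * p_first                  # the halving tail doubles it, ~0.52

def year_at_cumulative(target, start_year=2011):
    """Year by which cumulative singularity probability reaches target.

    Assumes mass is spread uniformly within each millennium; never
    terminates if target > total, i.e. if no such year exists.
    """
    cum, mass, year = 0.0, p_first, start_year
    while cum + mass < target:
        cum, mass, year = cum + mass, mass / 2, year + 1000
    return year + 1000 * (target - cum) / mass

unconditional_median = year_at_cumulative(0.5)        # ~6900 AD
conditional_median = year_at_cumulative(total / 2)    # exactly 3011 AD
```

So the literal (unconditional) reading pushes the answer out near 7000 AD even though the conditional median sits right at 3011 AD, just as the comment says.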

Hmm. For the anti-agathics question I’m wondering if I should be taking into account the probability of x-risk between now and 3011. The question looks like it’s about our technical ability to solve aging, which means I should answer with P(someone lives to 1000 | no XK-class end-of-the-world scenario between then and now)? (Though of course that conditional is not what was written.)

Similarly here: I answered the cryonics/anti-aging/x-risk questions for the typical Everett branch, since I presume that makes them comparable to the responses of people who find MWI less likely.

My answer was 17 years off, and I gave 60% confidence. (Assuming a Gaussian distribution, 60% confidence for +/- 15 years means a standard deviation of 17.8 years, so I still was within 1 sigma.)

Also, “too high”? Seriously? The log-odds against (x − μ)/σ being more than 19 are about 800 dB; I’m not sure I’d be comfortable with assigning such a great confidence about a non-tautological proposition about the real world. (Except “Emile will torture 3^^^3 people unless I give him/her $5” and similar, of course.) :-)
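Both figures in this exchange can be checked with the standard library (assuming Gaussian errors throughout; this is just a verification sketch):

```python
import math
import statistics

# Sigma implied by "60% confidence in +/-15 years" under a Gaussian:
# solve P(|Z| < z) = 0.60 for z, then sigma = 15 / z.
z = statistics.NormalDist().inv_cdf(0.5 + 0.60 / 2)  # ~0.8416
sigma = 15 / z                                       # ~17.8 years

# Log-odds, in decibels, against a 19-sigma Gaussian deviation.
p_tail = 0.5 * math.erfc(19 / math.sqrt(2))
decibels = 10 * math.log10((1 - p_tail) / p_tail)    # ~800 dB
```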

I’ll bet 100 bitcoins against .00000001 bitcoins that Sir Isaac Newton will not publish the historical Principia Mathematica next week.

Edit: After considering the additional coinflips required to bring even that large a difference in money up to the relevant level, I think I’m going to withdraw my offer. Before I earned back my stake laying bets like that, I’d run into a situation where time travel had been commonplace for centuries but there was a huge conspiracy to keep it secret from me, or something like that.

Yup. Unfortunately, bitcoins are not currently subdividable any further than that, and I’m not rich enough to bet more. However, I’d be willing to throw in “and you don’t have to pay up the .00000001 bitcoin unless a coin comes up heads 220ish times in a row.”

Is this a general method for adjusting bets on long odds that make money impractical? I just thought of it.
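It seems to work as a general trick: each extra mandatory coinflip doubles the effective odds, so the number of flips needed is just a base-2 log. A sketch with the numbers from this thread (the 1e80 target corresponds roughly to the ~800 dB estimate upthread):

```python
import math

# Stretching a money bet of 100 BTC vs 0.00000001 BTC (odds of 1e10)
# out to odds of ~1e80 by requiring extra coinflips before payout.
money_odds = 100 / 1e-8          # 1e10
target_odds = 1e80
extra_flips = math.log2(target_odds / money_odds)
print(round(extra_flips))        # ~233 flips, so "220ish" was close
```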

I would take that bet, except that I am insufficiently sure in my understandings of the rest of reality if I happen to win to be confident that I’d want 100 bitcoins in that eventuality.

ETA: I should note that I didn’t run the numbers; 0.00000001 bitcoins is something I’d be willing to risk on a 1:2^220 chance for the amusement involved. It should not be taken to reflect a general policy of accepting wagers at what my estimate of these odds would be if I did decide to work them out more rigorously...

Not if for some reason you are nearly sure that it was before/after a certain date (which I wasn’t); I felt that to a first approximation a normal distribution described my beliefs (as of the time I was answering) decently enough, but YMMV.

I was entirely sure (20 decibels, at least) it was before gur Nzrevpna Eribyhgvba. That plus “some padding but not too much” got me within the margin of error, but I only gave 2 decibels of confidence that it would be.

For myself, I confused Newton’s birth date and the date of the Principia Mathematica :/ So I was off by more than 15 years, but still not too bad. I gave it a 50% confidence; 15 years is too short on that time frame, and my memory of dates isn’t good enough.

Yeah, but the fact that my estimate was pretty close to the correct date suggests that some underconfidence may have been at work. If someone had stated the exactly correct year, and had estimated only a 51% chance that they were in the correct zone, we’d probably look at them funny.

Maybe, but getting very close with low confidence is entirely possible with these estimation-calibration tasks: a uniformly chosen year between 1600 and 1800 could be the exact year, but the confidence of such a guess is always 15%.

Came out of activity hibernation to take this. Thanks for seeing a thing that needed doing and choosing to do it!

Problems with the gender field have already been discussed; the sexuality question has some of the same issues. “Gay” and “straight” don’t really make sense for people with nonbinary gender, and many people interpret “bisexual” as referring to “both” genders (male and female), as opposed to a more inclusive “queer” or “pansexual.” I do honestly appreciate how much effort you’ve put into making the survey as inclusive as it already is, though.

One more long-time lurker (over RSS) who just created an account to take the survey and comment. Probably my favorite survey I’ve ever taken; I’ll direct a few friends to it as well and try to get them to start reading the site.

Just finished the survey. I’m very much an LW lurker, who apparently succumbs to some type of self-confidence bias. Though I know nothing of probability theory (thus why a lot of the questions were left blank), I gave myself a 10% chance for the publishing question. (Was that a randomized question?) After a bit of consideration, I said [YEAR]—it was first published in [YEAR + 37]. I wasn’t too far off.

Maybe that same bias is what deters me from ever actually posting anything.

You should think about deleting the year; it screws with the calibration question. This question was put in to test the quality of your guesses, or more specifically the quality of the probabilities you assigned.
I read your comment before taking the survey and was unable to give an honest guess.

I took the survey, and I agree with some other comments about the difficulty of assigning probabilities to distant events. I decided to just round to either 0 or 1% for a few things. I hope “0” won’t be interpreted as literally zero.

Something bugs me about the IQ question. It’s easy to call sour grapes on those complaining about that metric, but it seems like such a poor proxy for what matters, namely, making awesome stuff happen. Not denying a correlation, just that I think we can do much better. Even income in dollars might be a better proxy, despite the obvious problems with that.

I think income in dollars is a much worse proxy for most things that matter than IQ, because it depends so much on age and career choice and where you live and so forth. And how do you know that what Yvain was after was a measure of “making awesome stuff happen”?

I think “age and career choice and where you live and so forth” also correlate with “making awesome stuff happen”, and in very similar ways. OTOH, I think IQ is probably a decent predictor of “making awesome stuff happen” among people with the same “age and career choice and where you live and so forth”.

Age is correlated in two different ways with making awesome stuff happen. (1) There’s presumably some peak period of life in which you’re more likely to do awesome things. (2) The likelihood of having made something awesome happen is monotonically increasing with age. If Yvain were wanting to measure awesomeness—and let me repeat that I see no particular reason to assume that was his goal—then #1 would be of some interest. But what you get by looking at income is more like #2.

Career choice is certainly correlated both with making awesome things happen and with income. But, again, in different ways. For instance, if you’re a very clever technically-inclined new graduate wanting to get rich, then finance and law are pretty good choices of career. Both offer, especially if you’re both good and lucky, the opportunity to get hold of very large amounts of money. But if those are careers that tend to produce a lot of awesomeness, I seem to have failed to notice. (Handwavy explanation: To get a lot of money, you need to do things that others find very valuable. You can do that by creating new value, which is hard; or by steering value towards the people who pay you, which is often easier. When someone working in finance makes his clients rich, it’s usually mostly at other people’s expense: to buy low and sell high, you require others to sell low and buy high. Law is somewhat similar, though I think it tends to be more about steering anti-value away from your clients.)

There are people in law who are making awesome things happen, but they are not getting paid anywhere close to as much for it as the ones who are doing standard things for deep-pocketed clients.

True, I was just thinking that something that correlates (loosely) with “having made awesome stuff happen” might be better than something that correlates with “has one of multiple skills that contribute to the hypothetical ability to make awesome stuff happen”.

As for whether “making awesome stuff happen” is the right underlying metric… what else?

I took the survey. Got Newton wrong by over 50 years. At least my confidence was appropriately low.

I would suggest requesting probabilities in a simple, exception-less way. Why not just ask for a number from 0 to 1? “Use percentages, but don’t put down the percentage sign, unless you’re going below 1%, then put the percentage sign so I know it’s not a mistake” looks to me like asking for trouble.

I’m not sure why it should disturb you. If the probability p of intelligent life evolving in any given galaxy is the same for all galaxies, and there are about 100 billion galaxies in our observable universe, then the chance of intelligent life in the observable universe is about 1 − (1 − p)^(100 billion). This assumes that whether life evolves in any one galaxy is independent of whether it evolved in another.

I wish I had remembered to use this formula when I took the survey.
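For what it’s worth, the formula is easy to evaluate stably even for tiny per-galaxy probabilities (a sketch; `p_any` is my own name, and log1p/expm1 just guard against floating-point underflow):

```python
import math

# P(intelligent life in at least one galaxy), assuming independence
# across ~100 billion galaxies with identical per-galaxy probability p.
def p_any(p_per_galaxy, n_galaxies=100e9):
    # 1 - (1 - p)^n, computed as -expm1(n * log1p(-p)) for stability
    return -math.expm1(n_galaxies * math.log1p(-p_per_galaxy))

print(p_any(1e-13))  # ~0.01: one in ten trillion per galaxy still adds up
print(p_any(1e-11))  # ~0.63
```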

Ontologically basic = at the lowest level of reality. For example, a table is not ontologically basic because there are no tables built into the laws of physics; but arguably, an electron is ontologically basic, since we can’t explain electrons in terms of anything smaller or more basic.

A standard claim of “robust” supernaturalism is that there are minds (mental entities) which cannot be understood in terms of any more basic constituents of reality. E.g., your soul is not made of almitons, and god is not made of pixie dust. God is supposed to be ontologically basic—he is built right into the lowest level of reality, no moving parts.

The importance of making that caveat is that it might be defensible to say that perhaps some alien created us, but that is not really what most people mean by a god, since presumably the alien has a nice (evolutionary?) causal history.

For my part, I have the same problem with “A vastly powerful God intentionally created human life” that I do with “A vastly powerful alien race intentionally created human life”; that “God” is ontologically basic and an alien race isn’t doesn’t particularly matter to how seriously I take those claims. For me to object to “God created human life” on the grounds that God is an ontologically basic mental entity would be to ignore what seems to me the much more important problem of purporting to explain phenomena by positing conveniently powerful entities for whom no other evidence exists.

Indeed both views have the problem you just spoke of, but the supernatural view has still another deficit, which we might call a failure to explain. When we posit aliens, we posit something which we presume has a causal history in terms of more fundamental parts, but when we posit a supernatural god or the like, we posit something vastly complex yet with no parts. It is as if the entire text of “Finnegans Wake” were the 3rd letter of the alphabet, or as if particle physics tried to explain the universe in terms of quarks, leptons, and dinner tables.

There is yet another point, which is that the alien “gods” are not what one might call “religiously adequate.” Nobody wants to worship mere fellow creatures, no matter that they might have created us.

I agree that positing an ontologically basic creator has one more deficit than positing an ontologically non-basic creator. I just don’t think that’s a particularly important place to draw the line. Far more important to me is the difference between positing a goal-directed creator vs. a non-goal-directed one, for example. To my mind, positing alien astronauts who came to Earth in order to create human beings is nearly as problematic as positing a god who did so, and focusing my attention on the extra deficit introduced by the latter is not a helpful use of my attention.

Re: “nobody wants to worship mere fellow creatures”… I’m not sure if I agree with this, as I’m not exactly sure what it means. Let me put it this way: if glowing entities descended from the sky tomorrow and demonstrated vast powers and claimed to have created humanity, I’m confident that >15% of humanity would worship those entities. If those entities were demonstrated to have internal structure and be constructed from more fundamental parts, that prediction doesn’t change. Do you disagree with either of those predictions?

If those entities were demonstrated to have internal structure and be constructed from more fundamental parts, that prediction doesn’t change.

You cheated! Have them begin by worshiping something; however you change its nature, worshipers will still follow it.

If one of the glowing entities had an anti-gravity pack fail and fell a few hundred feet onto asphalt, rupturing its flesh, dismembering its limbs, and bursting its carcass open in a gory rain of blood and giblets on national television during first contact, you might not get 15%.

I infer that you agree with my predictions, despite considering the second one irrelevant to the question at hand. Confirm/deny?

I agree with you that in the case you describe, you probably wouldn’t get 15%. I don’t think that has much, if anything, to do with the entity’s basic ontological nature. I think it has a great deal to do with its demonstrated fallibility and mortality, as well as the emotional consequences of bloody deaths.

If glowing entities descended from the sky tomorrow and demonstrated vast powers and claimed to have created humanity and claimed to have internal structure and be constructed from more fundamental parts, I’m confident that >15% of humanity presented with all of those facts up front would come to worship those entities. I infer that you disagree. Confirm/deny?

There are emotional consequences to apparent perfection that we intellectually know isn’t real, so there is no neutral framework.

claimed to have internal structure and be constructed from more fundamental parts

That’s not a complete enough back story, because they could be the agents of something else. If they don’t say more than this, 15% might not worship them as more than angels. Let’s say they claim to have evolved from goop, just like all animals on Earth except humans, which they claim to have created. Then, I think “Nobody wants to worship mere fellow creatures” applies, though by “nobody” I mean “only certainly tens of millions”, and I’m not too confident in the 15% figure.

Agreed that there’s no neutral framework in the sense I think you mean it: however that meeting goes, it has emotional consequences.

We’re bouncing several scenarios around, so to avoid confusion I will label them… A is where they show up and don’t announce their ontological nature, B is where they show up and take a bloody pratfall, C is where they announce their non-basic ontological nature, D is where they show up and announce they evolved via natural selection of random modification.

If I understand what you mean by “worship them as angels,” I agree that, in C, most of the worshippers would likely do that. If that’s not what you meant by “worship” then I might agree with your original claim; I’m not sure.

I agree that most of the people who would worship them in C would not worship them in D. If D is what you meant by “fellow creatures” then I probably agree with your original claim.

It occurs to me that my reply is a little too qualitative, so I’ll try to put it into the language of probability. I have a prior on the idea that aliens created us; it is very low (maybe 100,000:1) but I feel quite certain that the proposition is physically meaningful, and if you handed me evidence I would gladly update in that direction. On the other hand, it is not immediately obvious to me that the idea of a supernatural god is physically or indeed logically meaningful. I’ll still grudgingly quote you a prior, but with a sinking feeling in my stomach.

Took the survey and was quite unsure how to answer the god questions… If we take it, for example, that there’s a 30% chance of the universe being simulated, then the same probability should be assigned to P(God) too, and to P(one of the religions is correct) as well.

I can understand saying that “the universe is a simulation” implies “there is a god” for a deistic definition of god. But why would it imply that one of the religions is correct? Do you count deism as a religion?

Well, we enter the problem of the “definition of god” right now. Does a tree that falls in a forest with no one to listen make a sound? It depends on whether “sound” is “vibration of the air” or “an acoustic signal in a brain”. The same goes here. If the universe is a simulation, there is a “god” if a “god” is “a conscious entity that created the universe”, but not if a god is “an omnipotent, omniscient entity that existed for always” or anything else that most religions stick into the word “god”. And if “god” is an ontologically sentient entity that can’t be reduced to non-sentient components, then it’s unlikely that the creators of the simulation are like that, but not totally impossible (since the hypothesis space of how the “real universe” might be is very large).

If you understand “for always” as ‘ever since this universe has existed’, omniscient as ‘who knows everything about this universe’, etc., then a simulator would pretty much qualify as a god under that definition.

I wouldn’t say that a simulator is omniscient about its content. It’ll know all the positions of quarks and everything, but that’s not being omniscient in the sense given to God by major religions. An “omniscient God” as stated by theists doesn’t only know the exact quantum state of my brain, but also what it means in terms of actual thoughts, knowing how to interpret that exact configuration as me being dishonest or whatever. I doubt most simulators would have that level of awareness of their content. It is theoretically possible to build one which does have it, but it’s not a certainty at all that a simulator will have it.

If this universe is completely reductionistic, which a simulation probably would be, then your “actual thoughts” (and the existence of trees, etc.) are logical implications of the configuration. Does an entity with logical uncertainty still count as omniscient? But then we’ve gotten into definitions again.

I still don’t know whether you, personally, think a deistic god implies that one or more religions is true. It doesn’t particularly matter, though. Your original point that the answer to the god question depends on the answer to the simulation question is a good one.

Don’t be. It’s not like knowing that score will actually open any doors for you or constrain your anticipations in any meaningful way; in all likelihood you already know what problems you’re smart enough to tackle to a much greater precision than an integer in the range 0 to ~160 can possibly give you.

I only know it because I was tested at my parents’ or school’s behest in childhood. I certainly wouldn’t pay for it as an adult.

I don’t feel like it’s embarrassing to know it—why be embarrassed? (I remember first learning mine by overhearing my parents talking about it.) It might be embarrassing if you put too much weight on it over practical ability, or if you waved it around as a substitute for convincing argument. But I don’t see too much cause for embarrassment in simply knowing it.

I assumed that was more based on cultural norms than LW norms. Generally people don’t discuss their IQs in polite company (or potentially-high-variance-IQ company, maybe), especially high IQs, because of the risk of being seen as bragging about something that other people may not view as high-status. In discussions outside LW I’ve heard people be somewhat condescending toward people who even admit to having gotten their IQs tested, as it’s often associated with intellectual pretension. (And, in turn, being seen as claiming high status in a way that actually marks one as low-status is associated with social unawareness.)

Some of the questions made me feel a bit stupid, which is probably a good thing now and then. Had to answer Deist/etc. for the religious identity question, because there wasn’t an option for epistemic untheist with Christian ethical heuristics and an admittedly indefensible level of wishful thinking. But “etc.” will do :)

Here’s hoping we all live to 2100 and find out whether we were right about that stuff.

I think the probability of 90% die-off by 2100 attributable to a single cause is low, but let’s face it, an interconnected cascading clusterfuck of 5%-fatal catastrophes would be bad enough, and sadly I think that’s more likely. Or I’ve been reading too much Jared Diamond.

I think there is a difference between “I have looked over all the evidence intensely and find the evidence and counter-evidence to weigh precisely in balance such that my estimate of the probability of event X is 50%” and “I don’t know anything about X, so I will default to 50% even if it isn’t reasonable”.

It’s the difference between “I know fair coins produce heads 50% of the time” and “what’s a fair coin?”. I wanted the second option when talking about many worlds—I just haven’t read the sequence on quantum mechanics yet, and I haven’t read anything outside the sequences on quantum mechanics either. I just have an educated layman’s understanding.

Surveys always need more respondents. When Wikipedia or Reddit want to publicize things, we/they use a bar at the top of the page. Can we do that? (It doesn’t have to be as obnoxious as the donation fundraiser ones WP uses!)

I took the survey late last night after first noticing the posting here. Unfortunately, I was so tired that I forgot the instruction to use double-digit answers and remembered it a few minutes after hitting the “Submit” button. (Here come the downvotes.) If Yvain can identify my submission, put a “0” before all single-digit answers. If not, contact me privately and I’ll provide some help identifying it. I lurk and never comment here because frankly you are all more intelligent than I am. But I do want to improve my rational thinking skills, so here I am.

Thanks for conducting this new survey, Yvain. I eagerly await the results.

Slightly off-topic, but it would be interesting to see how members of this community respond to the PhilPapers survey. (You must be registered to take the survey.) My own responses can be found here.

We should make a thread in the discussion forum for all high school students to introduce themselves and get advice on how to navigate the idiocy that is our education system, and advice on what to study in order to get more involved with transhumanism. I need one more karma to make the post...

Took the test. I assigned 70-80% to “God creating the universe”, as I strongly (80%) suspect that it’s a simulation, that it’s being more or less actively controlled and manipulated by some outside entity/entities, and that even if said entity is one of many and has comparatively little power over its native environment—even to the point of resembling a human scientist—it’s pretty much pointless for us to call it anything but a god.

I had been under the impression that IQ = mental age / physical age. I’m not sure how to understand a test that doesn’t ask how old one is.

I also just tried that test and got a score that I am pretty sure is ~20 lower than the one I got as a small child (though I can’t be sure, since my parents declined to tell me exactly how I scored at the time).

Depends on the test. E.g. some IQ tests measure the size of your vocabulary. IIRC, the reason why this works is that people with a higher IQ tend to be quicker at learning the meaning of a word from its context, and therefore accumulate a larger vocabulary. That makes the size of your vocabulary adequate as a rough proxy for IQ—but only within your age group, since people older than you have had more time to accumulate a large vocabulary.
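The age-group norming described here is how modern deviation IQs work in general: the raw score is converted to a z-score against same-age norms, then rescaled to mean 100 and standard deviation 15. A minimal sketch, with made-up numbers for illustration:

```python
# Deviation IQ: a z-score within one's own age group, rescaled to
# mean 100, SD 15. This is why a raw measure like vocabulary size is
# only meaningful relative to norms for people the same age.
def deviation_iq(raw_score, age_group_mean, age_group_sd):
    z = (raw_score - age_group_mean) / age_group_sd
    return 100 + 15 * z

# One SD above the mean for your own age group -> IQ 115.
print(deviation_iq(60, age_group_mean=50, age_group_sd=10))  # 115.0
```

This also explains the older comment about “mental age / physical age”: that ratio definition is the historical one, and modern tests replaced it with the age-normed z-score above.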

I started trying to fill this out, but for more than half the questions I either don’t know/remember, am too unsure about the supposed meaning of the question and would require clarification, or can’t answer meaningfully because of the US-centric assumptions of the question.

Maybe the next time, the survey should not allow probabilities of exactly 0 or 1 (rather than saying they’ll be interpreted as 0+epsilon and 1-epsilon), and should give the option to express probabilities as log-odds if they’re extreme. (Anyway, I didn’t give any probability lower than 0.1% or higher than 99.9% in my answers.)
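For what it’s worth, the conversion is simple in both directions. A minimal sketch in Python, using base-10 log-odds (chosen here just for readability; natural log works the same way):

```python
import math

def prob_to_logodds(p):
    """Probability -> base-10 log-odds. Extreme beliefs become modest
    numbers: 99.9% -> about +3, 0.1% -> about -3, 50% -> exactly 0."""
    return math.log10(p / (1 - p))

def logodds_to_prob(lo):
    """Inverse transform; well-behaved even for very extreme log-odds."""
    return 1 / (1 + 10 ** (-lo))

print(prob_to_logodds(0.999))  # about 3.0
print(logodds_to_prob(-3.0))   # about 0.000999
```

The survey-design appeal is that log-odds make “0” and “1” literally unreachable (they map to ±infinity), which enforces the 0+epsilon / 1-epsilon interpretation automatically.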

Survey complete. Had to answer “there’s no such thing as morality” because I can’t imagine a configuration of quarks that would make any of the other choices true. What would it even mean at a low level for one normative theory to be “correct?”

A quark, or a configuration of quarks, or definable in terms of configurations of quarks. Presumably occlude really meant (or perhaps would have meant, given more knowledge of physics) “elementary particles”, since not all elementary particles are quarks; or something more complicated involving quantum fields. With such fixes in place, it doesn’t seem to me like a fully-general argument against (for instance) computers or people or minds or symphonies, but it still has some force against moral realism.

Yeah, but to be flip, what does “agree” mean? What position you find most intellectually coherent? What you use to regulate your own behavior? What you use to form social judgments of behavior? I put down “consequentialism,” but I could have put down “virtue ethics” or “there’s no such thing as morality” if I were using a different frame.

Actually, you can tell someone what to do on the basis of it being “wrong” or “right”; the only requirement is that their morality/preferences are similar to your own. If you can convince them that their actions are contrary to their own moral preferences, you could manage to convince them to do that which you both consider to be “right”.

But if you meant that it is impossible to determine what someone should do by means of a universal set of moral rules, then yeah, clearly not. But the absence of a universal morality does not imply an absence of all morality.

That’s not the question. The question is which ideology you most identify with. So what you answered is “The philosophy I most identify with is that there is no such thing as morality.” This seems like a nonsensical position, since it would imply that concepts don’t exist simply because they aren’t physical. Morality is a very real part of the universe as it can be observed in the functioning of the human brain.

Admittedly, I did find the question somewhat odd, as what is asked is what I most identify with, and it’s a very bad habit to make ideologies part of your identity. I interpreted the question as “which form of morality do you approve of the most”, which for me was consequentialism since out of those three I believe it to be the most effective tool for improving human welfare.

I interpreted the question as “which form of morality do you approve of the most”, which for me was consequentialism since out of those three I believe it to be the most effective tool for improving human welfare.

You also judged the alternatives on consequentialist grounds. I interpreted the question as “which form of morality do you use to decide what to do (or wish you used to decide what to do)?”

Good catch! I should have added “and improving human welfare is more important to me than any other considerations”.

Anyway, I think morality is more than just “how do you decide what to do”; it’s about what you feel people in general should do. And in that case I would prefer everyone to use consequentialism, even though that isn’t strictly how I make my own decisions.

Morality is a very real part of the universe as it can be observed in the functioning of the human brain.

I try, of late, not to create sections of map that don’t correspond to any territory. What if we taboo the word morality? Is there brain function that corresponds to morality and that is distinct from preferences, beliefs, emotions, and goals? It seems that positing the existence of something called morality creates something additional and unnecessary.

It does correspond to territory: that specific functioning of the human brain. Human preferences are not part of the map; they’re part of the territory. Admittedly, you can describe the same thing using different words, but that’s true for everything. Morality is a subset of preferences in that it only covers those preferences that describe how intelligent agents should act. It is still a useful term for that reason.

I have found, however, that talk of morality leads to enormous amounts of confusion (fake agreements, fake disagreements, etc.), and so I agree that tabooing the word and substituting the intended meaning has a great deal of merit.

The political question ought to have a “libertarian socialism” answer (the green/southwestern quadrant in The Political Compass; an extreme version is described in An Anarchist FAQ). I answered “Socialist, for example Scandinavian countries” because it was the least unsatisfactory one. (Or at least there should be a “None of the above” answer.)

ETA: BTW, that’s probably the most common understanding of the word libertarian outside the US.

Interesting link… it seems like they would do well to have a section devoted to jargon—I’ve heard people talk about being against “property” before, but had never encountered a description of the distinction between that and various other sorts of rights to use and possession.

The distinction is standard in Marxism. From The Communist Manifesto:

The distinguishing feature of Communism is not the abolition of property generally, but the abolition of bourgeois property. But modern bourgeois private property is the final and most complete expression of the system of producing and appropriating products, that is based on class antagonisms, on the exploitation of the many by the few. In this sense, the theory of the Communists may be summed up in the single sentence: Abolition of private property.

[...]

To be a capitalist, is to have not only a purely personal, but a social status in production. Capital is a collective product, and only by the united action of many members, nay, in the last resort, only by the united action of all members of society, can it be set in motion. Capital is therefore not only personal; it is a social power. When, therefore, capital is converted into common property, into the property of all members of society, personal property is not thereby transformed into social property. It is only the social character of the property that is changed. It loses its class character.

It does not seem like this is actually drawing out the distinction I was referring to. Or at least, as much as it is attempting to, it is associating various dubious concepts with the distinction, like “class” and “class antagonisms” and “exploitation”. But then, that passage mostly reads like word soup to me.

The worry, when someone talks about abolishing property, is that one is thereby depriving the individual of rights on a standard Lockean analysis. As these sorts of socialists would agree, it is important that the worker control the destiny of the products of one’s own work. This is identified with the natural right to property, and follows straightforwardly from the rights to life and liberty.

A “use/possession” non-property right seems to support the Lockean right to property, but only until the property is “released into the wild”—thus, presumably, someone else cannot just walk away with my computer, because I need it for my work, but I also can’t just lock it up in a closet so nobody can use it. Similarly, I could maintain the right to the farm that I work, but could not exercise the right to prevent others from farming a plot of land I was not going to use.

I think there are still some serious problems with this picture from several different angles, but it’s nonetheless an interesting notion of property.

“The modern division of labor links together most everyone on the planet in a tremendously complex, cooperative web of relationships. Let’s call stuff that people can use individually personal property, and stuff that a great number of people need to cooperate in order to use means of production; and when these means of production are acknowledged as the property of individuals, let’s call them private property. Ownership of private property does not correspond to the set of those who work the private property to produce wealth; instead, a subset of people have control over these means of production, allowing them power over those who do not. We communists don’t want to get rid of personal property; instead, we want to convert the means of production from private property to some public kind or another.”

The best overview of the technical meaning of “exploitation,” at least in the later Marx, can be found here. (By contrast, I think I’d need to know what you find dubious about the concept of class to better explain it, since there’s no single technical definition of class within Marxist discourse and the range of them doesn’t wander very wildly from the normal English use of the term, which I assume you’re perfectly familiar with.)

Sure, that pretty well matches both my previous understanding from my study of Hegel/Marx and what I’d written above. The slippery part of this is the distinction between “private property” and “personal property”, and exactly what qualifies as which (and who gets to decide), and what happens to my personal property when I find it has become a “means of production”.

I was not expressing lack of understanding regarding words like “class” and “exploitation” when I called them “dubious”. I heartily recommend Hegel’s description of “exploitation” (from Phenomenology of Spirit—the lord and the bondsman) over Marx’s—Marx is basically just Hegel plus bad economics.

At any rate, I’m not particularly interested in hashing out any of this stuff on this forum—I had just found it interesting that there was a notion of property amongst “libertarian socialists” that seems very nearly compatible with the Lockean natural-rights analysis (and that I had not heard previously).

I just put together a discussion post about thinking about the probability of living in a simulation, but I’m not sure if I should ask people to fill out the survey (if they were planning to) before they read the post.

Probabilities? Makes no sense. But yes, a scaled response for each would be nice, since I’d say I’m about 40% consequentialist, 10% deontologist, 1% virtue, and 49% irrational about morality.

Here is what I thought it said: Which of these do you think is true? What it really said was: Which of these do you identify with? (I must have pattern-matched on the question when I saw the form of the answers.)

So your reply makes sense. Still, I would rather the question had been worded differently: Which of these do you use in practice when making moral choices? Which of these do you think best explains morality?

To me, “identify” is an evil word, because you could let something into your identity you didn’t want.

First, “global warming” isn’t quite the same thing as climate change. This is kind of a distinction without a difference, perhaps, but I find in many communities (not LW) that the semantic distinction between these terms causes confusion.

Second, and more important to me: supposing that N% of climate variation over time is accounted for by human activity, the wording of the question allowed some ambiguity between (N > 50) and (N is non-negligible). I’m fairly confident that N is non-negligible, which seems like the important question for policy purposes. I’m not confident that N > 50.

Well, it could mean that you think the climate is going to get colder, or that the mean temperature will remain constant while specific regions grow unusually hot/cold, or that the planet will undergo a period of human-caused warming followed by ice sheets melting and then cooling, or any number of other theories. Most of them are fairly unlikely of course, but P(any climate change at all) > P(global warming).

I took the survey, sometime last week I think. EDIT: I think I may also have messed up the “two-digit probabilities” formatting requirement. I can’t recall specifically any answer that might have violated it, but I also don’t recall paying attention to that requirement while answering the survey.

I missed Newton by a horrendous ~25 years. If the publishing year of “On the Origin of Species” had been asked instead, I would have known the exact date. Occasionally I even celebrate it a little… jeez, exactly 3 more weeks until the day Darwin’s explanation destroyed the single “good” argument religions ever had. A very fitting occasion to grab a beer and stick it to the invisible man.

Also, I was glad the input fields were large enough to accommodate enough zeroes regarding the superstition and religion questions. I also left out most other probability estimates because I couldn’t answer them in any sensible fashion, which once again reminds me of all the blank spots on my map. I really should come back here more often...