OT8: Love Is An Open Thread

This is the semimonthly open thread. Post about anything you want, ask random questions, whatever. Also:

1. In case you missed the belated announcement last Open Thread, Ozy has a blog again. Since I ban discussion of race and gender in these open threads, each semimonth Ozy runs a concurrent Race And Gender Open Thread, complete with concurrent race-and-gender-related puns. And here one is now.

2. Thanks to everyone who comments with “Why would you bother writing about this? It’s so obvious!”. You have helped me see the light, and in the future I will make sure to only post things that I am certain zero of my several thousand readers already know.

3. The Less Wrong survey might close this weekend if Ozy and I feel up to starting the statistics on it then, so if you haven’t taken it yet now might be a good time to go over there and get started.

4. If you’ve taken the LW survey, you’ve already marked down whether you read this blog or not so I have information about you. If you haven’t, I would like to get some information about you to see what kind of people are here, who I have and haven’t scared off, and increase my sample size for some correlations I’m going to try to get off of the LW survey. So here is a Slate Star Codex Survey [EDIT: Now closed! Do not take!] for you. Remember, if you’ve already taken the LW survey, do not take this one too!

SSC has a heavily 20-something audience, no? That’s when you *most* need spoiler protection for kids’ movies. I personally didn’t see anything animated in between the “I’ve become too cool for this!” phase in my teens and the “Wait, some of these are pretty good!” phase when kids came along.

Comments on the SSRI thread are closed for some reason, so I’ll ask here instead. Scott, can you (or anyone else with some medical/biological credentials) comment on the effectiveness of St. John’s Wort in treating anxiety and depression? I’m not a proponent of herbal/alternative medicine, but I was on SSRIs before and had some rather unpleasant side effects. Now my anxiety is quite bad again and I’m probably going to have to go on something, so I was wondering if St. John’s Wort might be a workable alternative. Some studies seem to indicate that it is about as effective as SSRIs, but with potentially fewer side effects. Apparently it’s the most commonly prescribed “drug” for depression in Germany.

Also, I’ve read on some sites that SSRIs have been linked to a number of cardiovascular issues. The sites making these claims didn’t strike me as very reliable, but it is a big concern of mine as I have a family history of heart disease.

I will of course be talking to my psychiatrist about these things, so I’m not asking for medical advice on what I should be taking. I just want more information so I can be sure to ask the right questions about whatever the psychiatrist suggests.

German studies are more positive than studies anywhere else, unclear why but may actually reflect a different preparation of the plant being used in that country. SJW (uh, unfortunate abbreviation coincidence there) has worse study results and is more potentially dangerous/interactive than some of your other choices.

If you haven’t tried bupropion (Wellbutrin) I would recommend that next. If neither SSRIs nor bupropion works for you, then you might qualify for a tricyclic (maybe a bad idea depending on your heart problems), and then you can start talking about antipsychotics or MAOIs. I would go through all of those and a few more before I started trying SJW. And I’m not just saying that as a Big Pharma shill – if you want to try supplements, I’d go with SAMe or folic acid first.

The main SSRI/heart disease link is increasing something called the QT interval, which when it gets too long causes dangerous arrhythmias. This is only really taken seriously with Celexa – although the others can cause it, it’s a pretty minuscule effect unless your heart is really really looking for an excuse to break. If you’re concerned, it’s very easy to test your QT interval with a simple EKG that can be done at any doctor’s office – if it’s below a certain point, the chance of an SSRI pushing it into the danger zone is very low. Unless your family’s problem specifically involves this QT interval, I wouldn’t worry much. If your family’s problem is heart attacks, SSRIs actually help slightly – currently unclear how much of this is due to some kind of mind-body effect of being less depressed, versus serotonin having an independent effect on platelets.

[disclaimer: not official medical advice, check with your doctor first, I think most of this is true but haven’t double-checked]

SNRIs are an alternative too, and often are a little more gentle when it comes to side effects. Chlorpheniramine is sold over the counter as an antihistamine, but actually functions as an SNRI at the OTC dose; go for the 12-hour extended release rather than the 4-hour pills. A related compound, brompheniramine, was developed into zimelidine, the first SSRI, by Arvid Carlsson, who won the Nobel Prize for this work in 2000.

I have a friend who knows several people on various medications (including herself), and she thinks SSRIs are terrible and SNRIs are better. She says she’s known several people who did well on SNRIs who were only hurt by SSRIs.

I don’t trust her conclusions (I think she extrapolates from these anecdotes to strong negative conclusions about SSRIs without sufficiently weighting evidence from studies), but the raw anecdotal data is probably accurate.

I considered inviting her to comment here but the whole thing is so triggering for her that I think it’d just make her sad or anxious.

Thanks. That makes me feel much more at ease about trying another SSRI. The heart problems in my family have indeed been heart attacks. I’ve had EKGs and my QT interval was fine, so I guess I shouldn’t be concerned. I’ll definitely discuss the options with my psychiatrist though. I really appreciate you providing this information!

My reading suggests that beta blockers are mostly effective by means of preventing you from getting stuck in a feedback loop of physical panic symptoms, which seems to imply that they’re less helpful for people whose anxiety isn’t bad enough to cause much in the way of physical symptoms (and therefore probably wouldn’t be helpful to me, at least). But I can imagine this being either totally wrong or effectively wrong because actually people who think their anxiety doesn’t have physical symptoms are just really good at pretending it doesn’t. What do people know about beta blockers for anxious people who don’t often have racing heart/hyperventilation/all that good stuff and just worry a lot?

If you can do stressful things and observe that your heart rate and breathing continue at about the normal speed, it seems reasonable to assume that these specifically aren’t mediating anxiety.

I see that there are plenty of other vague symptoms that aren’t so easy to rule out, but I don’t know what it would even mean to have symptoms like “feeling lightheaded” or “clammy hands” or “numbness or tingling” or whatnot without actually experiencing them.

Bupropion (Wellbutrin) changed my life for the better in a number of ways. I had no intent to quit smoking when I started on it a decade ago. Little did I know bupropion is Zyban, and so, against my will, my desire to smoke was promptly extinguished, even as the embers of my desire to desire to smoke smolder on to this day.
It’s a little “speedy”, which is a good thing if you’re naturally sluggish. It doesn’t cause weight gain and there’s no sexual dysfunction.
It might not be the right choice if you’ve got a straight anxiety problem or you’re a naturally jittery person.
This is the best site I’ve found for info on meds: http://www.crazymeds.us/pmwiki/pmwiki.php/Meds/Wellbutrin

“Anything that may work (and even stuff that doesn’t) will have side effects.”

When a herb has been used for centuries, any important side effects have probably been noticed and noted. If the claim means “check a good herbal reference for the known side effects of any particular herb”, then yes, good idea. But that website seems to be using it as a blanket dismissal, without reference to herbalist sources on side effects.

The best pair of comments to get as a blogger are “This is so obvious, I don’t know why you bothered to post it” and “This is so obviously false, I don’t know how you could possibly have posted it.” Bonus points if they’re both angry at you rather than arguing with each other.

I once wrote an academic paper for which reviewer #1 said that “section X is a rather obvious application of standard theory, recommend shortening it or cutting it entirely” and reviewer #2 said that “section X is really fascinating, I’d love to read more of it”.

[Tarski] tried to publish his theorem in the Comptes Rendus Acad. Sci. Paris but Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest.

Since we’re now asking Scott for medical advice: What do I do if the primary symptom of my ADHD is forgetfulness? I mean, I don’t know of any drugs or other therapies that specifically target this particular symptom.

(P.S., Thank you for blogging [even stuff that’s obvious to other readers] and belated happy birthday.)

First of all, although I know what you mean, it is legally important that I clarify that nothing here is “medical advice” in the traditional sense, just a discussion among Internet friends about certain medical issues which you should talk to your doctor about before taking seriously.

That having been said, there are a lot of nootropics that claim to target memory, and a few that even have a little bit of support. See for example here.

I want to avoid shooting my mouth off too much about things I am only vaguely familiar with, so I’ll stop there. Sarah might know more. And if you contact me by email, I might be able to give you a couple of hints where else to look.

My memory for factual information is pretty good. By “forgetfulness,” I do mean executive dysfunction/ limited working memory. (I call it having narrow mental bandwidth.) (Sometimes I call it “time dyslexia.”)

I’m a little confused as to why a good ol’ stimulant wouldn’t work. Certainly worked for me. We’re not so good at bioengineering that we can “target” specific symptoms like that, I don’t think.

Also, what I find helps is watching more attentive people and trying to “import” some of their habits. You might never be able to stay on top of things, but you can automate some habits that make your life easier. For example, I basically learned how to get ready and out the door entirely from one person I dated who was really good/efficient at it. I’d watch them do it, try to follow along, and within a couple of years I internalized those habits enough that I could do it in a reasonable amount of time by myself, without forgetting roughly two thousand things at home every morning. There are simpler ones, too, like trying to cultivate a habit of checking your surroundings for bags, etc whenever you get up from a chair. I think the trick is to think of them as dumb, brute heuristics rather than active expenditures of mental effort.

Can you point me to some examples? I’m curious to see what you’re seeing.

I’m in that weird place where something seems obvious to you and then someone says otherwise and you’re just really confused. I’ve always thought/assumed that all symptoms of ADHD were reducible to poor executive function.

For NaNoWriMo, I’m writing a book which shows, in a series of semi-connected episodes, how in the future a community that I consider “good” (in terms of its values, its culture of relating interpersonally, its epistemic humility, and its mechanisms for preventing/solving commons-style problems) comes into being. (I’m quite confident it’s one that Scott and most readers here would also consider good.) And I’m trying to make sure that it uses no technology that’s not available today, doesn’t violate any assumptions about human nature that I know of, and is actually possible and probable in reality. (This social group is not, and has no interest in becoming, a state; it’s a purely civil gathering.)

The first episode/part is the story of how one person (who I’m calling the Ecclesiologist, for now) founds a network of autonomous parish-sized groups (whose size is restricted to between four and one hundred and fifty people) by pitching this idea to people on the internet, and then starting with meetup groups where the concentrations are highest (currently, I guess the major population centres of the USA), and making sure that the initial members are very well-selected.

(The Ecclesiologist also explicitly seeks to make sure that entry into and exit from such groups is frictionless, and the cost of founding a new group is small to negligible; and their voluntary nature and cheap exit is an explicit core value of such groups.)

I intend this first part to also be a “how-to” manual for how such social groups can be founded today, including listing the core rules, and problems that I foresee and how they’re overcome.

The rest of the story is about how these cells then go on to be the framework within which the rest of the advances towards creating this “good” society unfold. I haven’t thought them through completely, but some examples of what these advances are: changing education by incorporating what we know about what works and what doesn’t; how sports are treated very differently, are themselves very different from the sports today, and have a very different culture surrounding them; how a better understanding of the brain allows the construction of tools that help people understand and overcome what were hitherto called “moral” problems, such as “laziness” or “forgetfulness” (like the commenter above, I, too, am “forgetful” unless I stress about something a lot, and it’s not in my control, but people still assume I forget things because I don’t care about them), and defeat akrasia; how community norms change to first raise the truthfulness waterline, then act against misleading advertising, then emotion-based advertising, and finally misleading or “polite fictions” in interpersonal interactions; and many more.

I’d appreciate any suggestions about, ideas based on, or (constructive) criticisms of this premise.

I thought “damn, I wish I’d thought of that” after I read your post, so that’s probably a good sign for the quality of the hook.

As for constructive criticism, I find that most utopian fiction tends to contain only characters who are all weirdly similar, as though written by someone who thinks that the real problem is that everybody isn’t just like them. A utopia that doesn’t appeal to different kinds of people isn’t a utopia.

This occurred to me when I was thinking of how to write characters, and realised that I was just writing myself into too many people.

So I decided to deal with it by stealing characters from real life (suitably modified, of course), mostly either people I know, or frequent/famous commenters on this (and other) blogs.

Some more backstory as to how I deal with this (and some other) problems: one of the Ecclesiologist’s motives in doing so is to capture the sense of community he had experienced in the years before he migrated to the USA (he’s an immigrant to the USA; I’m comfortable with writing that bit of myself into him), but found sorely lacking in the atomised life of this country; and he wanted a community based on the ideals that drew him here (classically freedom, justice, and fairness, and the modern rationalist virtues of honesty, scientific thinking, rational thinking, epistemic humility, niceness and community, and co-ordinating to beat commons-type problems) instead of either religion or politics, both of which he found required him to sacrifice his integrity. What community appeals most to him? Something like the rationalist community, or the commentariat of this blog, but more easy-going.

Additionally, to deal explicitly with the problem you’re talking about, I’m going to follow only one strand of development – the one that I’m interested in – while portraying it as one among multiple different such strands that have diverged since the founding, and all of which combined are only a small (but disproportionately significant) part of the wider societies in which they exist.

I doubt very much any kind of departure is frictionless, but that’s my natural pessimism.

If John and Mary join UtopiaGroup because they believe in the ideals, find the other members not alone tolerable but likeable, and work for the aims of the group, then one or three or ten years later find they want to leave, why do they want to leave? What has changed? How are they/the others different?

Leaving behind friends and community that you have outgrown/they’ve changed beyond recognition involves some degree of regret and mourning; it’s only frictionless when your roots were so shallow you could pull up and roll away like tumbleweed and didn’t invest enough to care.

For interest, toss in a little of that “integrity sacrifice” for your Ecclesiologist: what changes and compromises does he have to undertake along the way? Or what refusals, what privileging his integrity over the benefit of forming these intentional communities? What about when the idea gets big enough to take off without him and starts changing in subtle and not so subtle ways?

As an outsider looking at megachurches and how they bloom, blossom, and bust (Mark Driscoll and Mars Hill appears to be the current example of ‘pretty much self-declared pope of his own foundation finally gets turfed out’), these kinds of things fascinate me.

And of course, America has plenty of examples of people trying to incarnate their vision of the perfect society, from the Pilgrims on down 🙂

One goal of writing the book is to show that it can, in fact, be done, show what rules I think are necessary for it to be done without falling into the failure modes I know of, and inspire people to actually do it.

Presumably other people who are smarter than you have attempted to identify failure modes and not succeeded, right?

I’m not trying to convince you to abandon the project. But if you haven’t entertained in detail the idea that you should quit right now, then you’re probably not putting enough work into anticipating potential failures.

I sort of feel like the only political systems that ever even make it into existence are those that fail extremely badly in one or two ways only. It might be a matter of choosing one’s least unattractive weaknesses rather than trying to manage all the necessary details of a society that has none.

* What keeps these groups from growing beyond the initial limit? I understand that the initial members are “well-selected”, but no one is 100% accurate at selecting people, not even an AI.

* What keeps the members interested in attending their group?

* You say that exit from a group is “frictionless”, but what does this mean? For example, I could exit the SSC group at any time, but I don’t want to. If Scott banned me, I’d be sad. How frictionless is this process?

* What keeps these mini-societies “good”, as compared to other cults, which are presumably “bad”?

* Are there any checks and balances on the Ecclesiologist’s power, or is he basically a god-king at this point?

* Regarding all of your speculative advances (perfecting education, defeating akrasia, etc.), it would be nice to imagine what that kind of a world would be like, but a) the Strugatsky brothers have already done this better (albeit in Russian, so that may not be helpful to you), and b) I don’t see how this is any more realistic than imagining a world with fairies and dragons in it. Probably less realistic, now that I think about it.

This sounds really interesting. The only advice I will give is to make sure the editing process is thorough, because while it is possible to write a novel in a month, it is not possible to write one that you will not look back on and wish to God you could change.

>The first episode/part is the story of how one person (who I’m calling the Ecclesiologist, for now) founds a network of autonomous parish-sized groups (whose size is restricted to between four and one hundred and fifty people) by pitching this idea to people on the internet

How very meta.

Anyway, what’s the relationship between these “autonomous” groups? Because if they start evolving to better compete for members, bam, Moloch eats you. If on the other hand they don’t, then you’re building a massive back-door into the system. (Incidentally, what you’re describing is founding a religion; were you aware of that?)

It sounds like you want your good-groups to be powerful. That is, you want them to be able to change things, many of which are overtly political and almost all of which sit in a political context, at least.

OK, how will they prevent entryism? How will they prevent status competition among the members based on holier-than-thou?

Take for example, “changing education by incorporating what we know about what works and what doesn’t”. Don’t you think many local teachers will become very interested in what the local good-group has to say about teaching? Don’t you think they will join? Don’t you think that when they join, the good-group will start to hear arguments and come to believe that we should Pay Teachers More?

Worldbuilding-wise: it is probably worth looking into the history of communes and intentional communities and seeing where they went wrong. I suspect a lot started with good intentions but an overly idealistic view of human nature.

Story-wise, what’s the main conflict? There’s a risk of this becoming a Mary Suetopia taking over the world if handled badly.

“How sports are treated very differently, are themselves very different from the sports today, and have a very different culture surrounding them” and “doesn’t violate any assumptions about human nature” seem to be mutually incompatible.

The only places where sports aren’t a communal expression of tribalism (with somewhat artificial boundaries between tribes) are those where something else has replaced sports. In most cases, the replacement is more violent.

Sounds really interesting! I’d echo what others mentioned about studying real-world examples in detail. Apart from what has been mentioned, Fourier, Owen, and those type of people sound a bit like your project.

Quick questions:

* Is there anywhere where you have defined the goals or the “good” that you describe?
* Do you intend to write more details about your project before the project itself, for example as a post on your blog designed to get feedback? I think myself and others would like to read more about your project!
* Do these communities engage in economic activity, or are they an outside-work-type-thing?
* Have you looked into cooperatives at all? The less-political end of the coop movement is quite interesting and unlike most other “communities of good”, they’re alive and kicking (though its a pretty small sector). I think they’re a little more practical (realistic?) than some utopian movements tend to be.

Feel free to take a look at my blog (and get in contact if needed) as some ideas there are semi-connected (probably the economy part plus the philosophy of altruism series. Also the social science section might have something useful for the “doesn’t conflict with human nature” part of your project).

On the gay marriage question, I think you should word it as “legally recognize” rather than “legal”. While there isn’t much confusion on the SSM issue, when it comes to polygamy, there is significant equivocation, so I think that it’s good to use precise language even when the meaning can survive imprecision.

It’s a paper published in the Japanese Journal of Applied Physics (unfortunately paywalled). Obviously not the most prestigious journal, but it’s peer reviewed and they find clear evidence of low-energy nuclear transmutation of elements. I don’t know what to think about the whole cold fusion thing, but this seems like the strongest evidence out there. Given that you agree that the recent Rossi report pushes almost all of the probability mass to either “real effect” or “active fraud,” this seems like a relevant paper (I would tend to doubt that a Japanese university and a Toyota research division are engaging in fraud).

There has been transmutation of elements for 90% of the history of cold fusion. That does seem like it offers that dichotomy, but most people just pretend those papers don’t exist. At the very beginning, one person did declare fraud, namely Gary Taubes (name ring a bell?).

On closer inspection, there’s very little to go on in the paper. There’s very sparse error analysis, no error bars for the data points that are shown, and essentially no discussion of experimental methods.

If I were designing this experiment to best draw out such an effect, this is NOT how I would have done it.

They flowed the deuterium past the metal at only 9 atmospheres, for 135 hours at a high flow rate, or 474 hours at a low flow rate. This part simply makes no sense. If you think for whatever reason that rate is the important factor, you need to isolate rate as the variable – use a fixed time.

The result is, they see more ‘transmutation’ for the sample that had a higher rate but was exposed to the D for less time and had less D pass over it. This is massively counterintuitive, bordering on BS right there given no other knowledge of the experiment. Given an irreversible reaction, longer time and more reagents should mean more reaction, not less.
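The arithmetic behind this objection can be made explicit with made-up numbers (the paper’s actual flow rates aren’t reproduced in this thread, so the values below are purely illustrative):

```python
# Hypothetical flow rates (arbitrary units/hour) -- illustrative only;
# the real values from the paper are not reproduced here.
high_flow_rate, high_flow_hours = 4.0, 135  # high rate, shorter run
low_flow_rate,  low_flow_hours  = 2.0, 474  # low rate, longer run

# Total deuterium passed over each sample = rate * time.
total_d_high = high_flow_rate * high_flow_hours  # 4.0 * 135 = 540
total_d_low  = low_flow_rate * low_flow_hours    # 2.0 * 474 = 948

# The high-rate sample saw LESS total deuterium over LESS time, yet
# reportedly showed MORE transmutation -- the opposite of what an
# irreversible reaction should do.
print(total_d_high < total_d_low)  # True
```

The point of the sketch is just that "higher rate" and "more total reagent" pull in opposite directions here, so couching the result in terms of rate alone hides the fact that both time and quantity went the wrong way.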

What I’d do: Make a high-pressure vessel, load it with a couple thousand atmospheres of D, not just 9. Let it sit there for a few weeks or months. That should drive the effect strength up through the roof.

That is of course what I would do (either fixed pressure of D or fixed total amount of D flowed, probably the former), but they couched their results in terms of rate. Probably because their result made NO FREAKING SENSE in terms of quantity of Deuterium or duration of exposure, so they hung their hat where it would be least obvious they couldn’t.

Even if they insisted on that weird metric, they could at least have held the confounding variables fixed, which they didn’t.

I decided to go read the paper. The data is pretty sanitized – I’d have liked to see some raw-er mass spectrometry data that they’re getting this from. Their silly multilayer structure makes me take the paper less seriously (I’m sure they have some reason for it, but that doesn’t mean they have a good reason), and just seems to introduce more difficulty.

Really, if I had to bet, they tried this many more times than got published (did you notice that there’s no deuterium-saturated sample with Cs implantation but no multilayer structure?) There’s no shame in that, it’s basically standard procedure – small-scale physics is hard, a lot of the time your sample is bad or contaminated or something just fails for mysterious reasons. But for this to be okay you have to have strong ways of showing the audience that your good data was in fact good, and not just more problem samples, which they don’t.

Their theory is that the multi-layer structure is an active participant in causing the fusion, so nothing should happen if it’s absent. I agree it would be a very very good check for them to do – if they see it anyway…

I propose Based Libertarianism. When people are down on their luck, they’ll look to the heavens and whisper “Thank You Based Libertarianism“. While not exactly “cellar door“, it rolls off the tongue easily enough. And I think it has the right connotations such as: Moloch’s nemesis(?); basic income; based on the market; based on common sense; basically awesome; and other puns.

P.S. Scott, when do I get an avatar? Do I have to give you something other than my email?

Also, I took an embarrassingly long time to figure out what “OT8” was supposed to stand for. DuckDuckGo seemed to think it had something to do with Scientology.

You should step back and try to understand if other people have substantive points, rather than solipsistically assuming that everyone else is frivolous and you are the sole serious person in the world.

Hahahaha I saw your comment without realizing it was addressed to me and thought, “Damn, that’s something I need to work on more.” Then I saw it was addressed to me. Sooooo, you’re probably right on that point, and I’ll work on it.

But seriously, I maintain my point that the causal substance beneath our object-level socio-political discourse is increasing friendliness and cooperation, in that these things improve our capacity to do work, and that improving our capacity to do work dissipates entropy more effectively.

What important insight is “being pedantic about friendliness in political discourse” missing? Is your problem with universalistic friendliness, or with the word “pedantic”? What, in particular, is wrong here?

Unfortunately, I’m having a little trouble understanding your critique, since it seems that in this subthread it was basically just “Step back and notice the big picture you’re missing” to two different commenters. Care to elaborate?

I don’t care about the word pedantic. That Jai is using it about himself is a warning accessible to himself. If Jai did not use the word, he’d still be wrong, but I might have written him off as a lost cause.

If you cooperate in a Prisoner’s Dilemma, knowing that your opponent is going to defect, you are not increasing your ability to do work; quite the opposite.
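To make the payoff argument concrete, here is the standard Prisoner’s Dilemma matrix with the conventional illustrative numbers (T > R > P > S; these particular values are textbook defaults, not anything from the thread):

```python
# Row player's payoff in a one-shot Prisoner's Dilemma, using the
# conventional illustrative values: T=5, R=3, P=1, S=0.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # R: reward for mutual cooperation
    ("cooperate", "defect"):    0,  # S: sucker's payoff
    ("defect",    "cooperate"): 5,  # T: temptation to defect
    ("defect",    "defect"):    1,  # P: punishment for mutual defection
}

def my_score(my_move: str, opponent_move: str) -> int:
    """Return the row player's payoff for one round."""
    return PAYOFFS[(my_move, opponent_move)]

# Against an opponent you KNOW will defect, cooperating yields the
# worst possible outcome (S), while defecting at least secures P.
print(my_score("cooperate", "defect"))  # 0
print(my_score("defect", "defect"))     # 1
```

Under any payoffs with the standard ordering, cooperating against a known defector strictly reduces your score, which is the "quite the opposite" in the comment above.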

Jared said something specific. Specifics matter. What he said is explained elsewhere on the thread. I didn’t just say BIG PICTURE. The second time I said: try to understand other people, but your response was all about cooperating in the abstract without any mention of Jared or the people you want him to “cooperate” with.

Nah. What it really seems to mean, at least in Britain, is “person who is to the right of me”: the Blairites think the Tories are neoliberal; the socialists/Green Party supporters think the Blairites are neoliberal; and the hardcore communist radicals think pretty much everyone is neoliberal. Maybe it used to have an actual meaning, but it doesn’t seem to anymore.

Back when I worked for an economic publication in China, I went out for lunch with a new editor from the UK who spoke about a mile a minute, and kept on going on about “neoliberals”. I recall that I responded that he was using the term in a purely pejorative fashion that had no descriptive value, and he responded “how else should I use it than pejoratively? I can’t see anything positive in them.”

I replied that it was best to take people at their word and use the words they use to describe themselves (in Chinese philosophy, we call that “rectification of names”). He found that hopelessly naive.

We never really got along after that, but neoliberal is a term that I’ve generally tried to avoid ever since.

That is: people who are ok with a welfare state, against overt sexism and racism, but don’t like far-left extremism, and don’t like debate being shut down whenever the far left calls something “offensive”. Steven Pinker is a good example; so is Scott himself. So are Sam Harris and Richard Dawkins.

Conservative political parties aren’t actually a good home for these people; they usually would be comfortable with center-left technocratic policy; but they get lumped in with reactionaries because they disagree with SJ writers.

The word for “center-left but not radical-left” used to be, simply, “liberal”. Liberal has become ridiculously overloaded, but I think now “progressive” is becoming sufficiently popular that “liberal” will soon be freed up to mean “specifically center-left” again.

If Richard Dawkins is being trumpeted as an example of what is meant by “liberal but not SJW”, then I am exactly the opposite of whatever he is supposed to represent, even if that definition doesn’t line up with my actual politics.

This bias and prejudice brought to you courtesy of my still incandescent anger over his bloody ignorant mischaracterisation of the turmoil in the northern part of my island as simply being “Protestant versus Catholic, another example of how silly religion makes people”. I have an entire rant about this, but I don’t think hijacking Scott’s blog is the place for it. Just put it down to “English people talking out of their arses about Ireland and the Irish has a long history”.

It sounds like you are giving in to this idea that all political opinions fall on a one dimensional axis. Liberals will interpret this as “I agree with you on everything, but am not brave enough to go all the way like you do.”

Further, if I were to try to push all political opinions onto one dimension, the unnamed group is slightly outside of the liberal cluster, but it is in no way clear to me if they would be closer to the center, or further away.

“Libertarian, but ok with the welfare state” is self-contradictory. We’ve got lots of terms for people ok with the welfare state; libertarian isn’t one of them.

Then what would you call someone who supports Pigovian taxes and a basic income (or a negative income tax), and takes the standard libertarian position on everything else? Perhaps they shouldn’t be called “libertarian”, but it’d be inaccurate to call them “progressive”, too. There are a few terms competing for this position (“liberaltarian”, “neo-classical liberal”) but none of them sound good.

We’ve got lots of terms for people ok with the welfare state; libertarian isn’t one of them. The whole thing just screams entryism.

I’m afraid I don’t understand who’s entering (entryism-fying? How do you verb ‘entryism’?) what, here. People who support a welfare state ‘entering’ libertarianism? or vice-versa? or something else I’ve missed?

Also, does anyone know how to make the comments system on this blog only send you a notification when someone replies to a comment you’ve posted, rather than notifying you whenever anyone posts a comment anywhere on the thread? I know a lot of people hate Disqus, but it’s usually pretty good at that.

It’s not a contradiction, it’s simply a position that is perhaps less libertarian than what is commonly associated with the term.

You obviously have a spectrum rather than a hard break running from progressive to liberal to libertarian. People may find themselves on the border between liberal and libertarian, but associate more strongly with the latter than the former.

It’s also worth pointing out that people can support a welfare state without appealing to left wing motivations.

For example, one potential problem of a highly technical economy is that a large portion of the population is not intelligent enough to perform remunerative work, and never will be. Telling them to get a job is as disingenuous as telling them to get a college degree.

I would consider myself to be libertarian. In an ideal world there would be no need for state-run transfer payments.

And in an ideal world, people would be less morally averse to the problems of giving people money for nothing.

But since we don’t live in an ideal world, libertarians ought to at least call less extreme leftists out on the welfare issue by proposing changes that achieve the same results but are less damaging to society.

A guaranteed income to replace all forms of welfare and/or paying people for every child they *don’t have* takes the welfare issue away from them.

In what way do you want “something sort of like”? My vocabulary tends to be comfortable with “left libertarian” (focuses on social issues, wrongness of government activity) and “right libertarian” (focuses on fiscal issues, amount of government activity).

In Sarah’s defense, there’s a difference between learning about something for the first time and trying to reinvent the wheel. It’s not so much that the things you’re posting about are obvious as it is that they’re foundational to the field you’re dipping into, and you didn’t do anything to signal that you’re aware that people have been thinking about these questions for a long time and may even have come up with a good idea or two.

“Things that persist, persist; things that don’t, don’t. This tautology underlies every single phenomenon we see around us, from molecules to religions. The purpose of science is simply to discover how and why any given class of pattern manages to persist. Life is best understood as a group of patterns that are able to persist because they spontaneously duplicate themselves and adapt to change… The universe is what is left over when all the non-self-maintaining patterns have faded away.” Steve Grand

Elua is the god of niceness, community and civilization.
Moloch is the god of selfishness and competition.

Elua is the god of cooperation.
Moloch is the god of defection.

Elua is the god of Good.
Moloch is the god of Evil.

More powerful than both of them is Evolene.

If you are a pattern that is good at existing, Evolene will spare you. If you are a pattern that is bad at persisting, Evolene will condemn you.

Patterns that are good at existing, are good at existing. This tautology is the animating principle of Evolene.

The total biomass of ants alone makes up more than half that of all insects and exceeds that of all terrestrial nonhuman vertebrates combined! You worry about cancer as a manifestation of Moloch. But think about how weak cancer is. Moloch tried to throw cancer at multicellularity and failed miserably. Elua laughed at him. Cancer is a nuisance for multicellularity, not an existential threat. Cells do not need top-down coordination. Evolene just favors organisms that are good at quelling intra-organismal fighting. This is not because of *coordination.* It’s because Elua is powerful.

Non-zero-sumness is out there. It is there to be harvested. Moloch is not able to destroy cooperation.

——————

In Archipelago and Atomic Communitarianism, you argue that a patchwork of discrete societies is the virtuous way to organize society. Archipelago as utopia. Well, plausibly Archipelago is what Evolene supports as well.

Cancer can kill *individual* organisms, but when different organisms compete for resources, the ones with effective ways to slow cancer will survive. Moloch may be able to kill individual societies, but when societies are competing, the societies that keep Moloch in check survive.

We shouldn’t just create Archipelago because meta-utopias are morally Good. (They are.) We should create Archipelago to keep Moloch in check.

We might not want to measure “dominance” by biomass. (Though, I’m not sure what other metrics to use. I think it’s an interesting conversation to have.) For example, when rats and dodo birds compete in an ecological niche, rats tend to win. Even if there were fewer rats by weight than dodo birds in some hypothetical world, the ability that rats have to beat dodo birds in zero-sum competitions for resources is relevant when discussing how dominant rats are.

Good to know. Though, I thought Azathoth was the god of evolution—namely, biological evolution—not god of “persistence.” Evolene is about the persistence of objects and patterns, not just biological or replicating ones.

“If you are a pattern that is good at existing, Evolene will spare you. If you are a pattern that is bad at persisting, Evolene will condemn you.”

…and in the recent Popehat borderline neoreactionary post:

“Gnon has no pity and laughs at your human ideals”

No. If you insist on personifying nature, get this: this person doesn’t judge. Will your genes survive? Your ideas? Sure, for a while. Then they will inevitably die. Could be in twenty years, could be tonight. But there is no condemnation from nature. Nature will never laugh at you. If nature were a person, nature would say “so your thing died out, don’t worry. It’s nothing personal. If it’s any consolation, I’ll die eventually too. Don’t look to me to tell you what your score is when the highscore table shows at the end of the game. How should I know?”

But humans judge. If nature causes my genes and ideas to die out, I bear it no grudge. But if you say my genes and way of life deserve to die out, then you bet I will. You’re not disinterested or impartial. You’re as biased as it is possible to be. There is literally no question you are less qualified to decide objectively. To set yourself as judge over what deserves to exist or not, is the most brazenly hypocritical power grab it is possible to imagine.

This reminds me of those movies containing an Artificial Intelligence which supposedly is without emotions, but in fact is passively aggressive, or just plainly aggressive.

It doesn’t teach us much about the AI, but it can teach us something about humans. Some humans believe they are above emotions, when in fact they are aggressive.

If someone wants to keep their debate clear of emotions, they should recognize and name this pattern. (Alternatively, they could admit they cannot have a debate without emotions, and choose some nicer emotions instead.)

Will your genes survive? Your ideas? Sure, for a while. Then they will inevitably die. Could be in twenty years, could be tonight.

Scott made a similar point in his Moloch post. “Gotcha! If you do work, you also die! Everyone dies, unpredictably, at a time not of their own choosing, and all the virtue in the world does not save you… The wages of everything is Death! This is a Communist universe, the amount you work makes no difference to your eventual reward. From each according to his ability, to each Death.” I think this is a very nihilistic approach to life. Sure, in the long run we are all dead, if not from natural causes a few decades from now, then when our empire collapses a few hundred years from now, or when the sun becomes a red giant a few billion years from now, or when the universe runs out of negentropy a few trillion years from now. Does that mean we should refuse medical treatment, neglect to have children, not give a damn if our civilization ends, and forget about FAI? No, of course not.

Yes, that is a similar point. But I’d like to take it past just you, and also to the things you care about, whether it’s your genes, your ideas etc. In Hávamál, Odin famously says:

“Cattle die, kinsmen die,
oneself dies likewise,
One thing I know which does not die,
word of the deeds of a man’s life.”

There are two ways to interpret that, one charitable one less so. The less charitable is that the author speaks of your reputation. If you’re a great guy, you’ll live on in the sagas!

That is false, of course, and that’s where I want to extend the point from what Scott says. Those stories will die too. It might take a thousand years, it might take ten thousand, but they will. A point will come when no one alive cares whether you were an honorable norseman or not.

But there is a more charitable reading – although, I think, not what the Norse pagans actually believed. That is that what never dies, is what you did. However much time passes, even if everyone forgets it, it will still be true that you did those deeds, whatever they were.

You’re right that insisting nothing matters in the end is nihilistic. But I do not advocate it, I only argue it as a consequence of saucing up consequentialism and appeals to nature. I do not support either consequentialism or “darwinistic” appeals to nature if the question is what really matters in life.

I don’t think the idea is that consequentialist or “darwinistic” appeals to nature can convince an ideal philosopher of perfect emptiness to care about prolonging his life, his genes, and his civilization. The point is more that we already care about those things as terminal values, and then pointing out the ways in which our current set of behavior are counter-productive to those goals, and ways in which we can do better. If you don’t share those terminal goals… well, we can always form separate communities and look for positive-sum ways to trade fulfillments of our utility functions, or else engage in politics/war if there is a limited resource we both want, or if one of us is unwilling to let the other exist.

You’ve carved off parts of Gnon you like and called it good. Gnon is neither good nor bad. When the right conditions exist, Gnon rewards coordination, when those conditions change Gnon punishes just as swiftly.

Ant colonies are supposed to be Elua? I’d consider any ant-like society to be a horrifying dystopia. I’m hardly alone; it’s in a fictional ant hill that the Totalitarian “Everything not forbidden is compulsory” Principle originated.

But Elua is stronger than Moloch. Ant colonies are a FACTORY within a FORTRESS. It’s hard for your typical beetle to compete with that.

The world continues to be full of both ants and beetles – and of multicellular and unicellular organisms, and so on. All sorts of things are viable at a variety of different scales (until they aren’t); “is Elua stronger than Moloch?” is a confused question.

I read a couple of old posts recently, one discussing Elua and one discussing Moloch. Seemed like Elua was defined by her ability to kick Moloch’s ass, but Moloch seems to be the one you’re more afraid of nowadays. Is this a victim of changing beliefs, or am I missing something?

(I only realized SSC was a thing recently, though I’ve long enjoyed the articles on bravery debates, so I’m archive binging now, and am not yet done)

The “dismal” in “the dismal science” is opposition to slavery. Thomas Carlyle coined the term in his Occasional Discourse on the Negro Question. By “reducing the duty of human governors to that of letting men alone,” economics says that you cannot reasonably enslave people and force them to work against their will. Economics opposed slavery, and Carlyle was arguing for bringing slavery back.

Economics says your money spends just as well as the next guy’s, and your time is your own to do with what you see fit. Laissez faire is more or less the opposite of slavery, so if you support enslaving people, you think economics is dismal.

Laissez faire also let Irish people starve during the Famine, because heaven (or economics) forfend the government interfere with the market or the rights of private property.

Though I think there, Carlyle would have agreed: if the Irish are so worthless they can’t manage to survive, let the weakest go to the wall (he liked the Irish even less than Negroes, if I’m remembering correctly). From his Irish Journey (undertaken in 1849, that is, two years after the official end of the Famine):

Some 5 or 6 Aberdeen and Ulster men; nothing else that one can see of human that has the smallest real promise here; “deluidit craiturs,” lazy, superstitious, poor and hungry. 7/6 no uncommon rent, 30/ about the highest ditto:- listening to Lord George I said and again said, “No hope for the men as masters; their own true station in the universe is servants,” “slaves” if you will; and never can they know a right day till they attain that.”

Though I suppose some would say it would be a good thing had the Irish all died of starvation and disease, and let someone useful come in and do something with the place 🙂

No, that is the opposite of the truth. The Liberal Party was formed specifically to repeal the Corn Laws in response to the Famine. Carlyle was not Laissez Faire. The very comment you are responding to is about how he was opposed to economists!

The Potato Famine was in large part a creation of government intervention. Even in the worst years of the famine, Ireland was a net exporter of food, because absurd laws made it more profitable to ship the food to people who didn’t need it than to sell it to those who did. In any sanely governed system, famine is met with mass importing, because starving people will pay quite a lot for food, but that was impossible in the Ireland of the 1840s.

Ah, how I love strangers telling me about my own history. (I wish there were some emoticon for a razor-edged smile, can anyone tell me if such exists?)

The Liberal Party was formed specifically to repeal the Corn Laws in response to the Famine.

The repeal of the Corn Laws had damn-all to do with the Famine; this was a topic that had been kicking off since the 18th century, was primarily to do with protectionism and the demands of industrialisation versus an agrarian economy where the large landholders were also the governing class, and its eventual success in 1846 (which is smack-bang in the middle of the Famine) really did little or nothing for the Irish (see Peel’s Brimstone, where American maize – corn in the vernacular – was imported as a cheap substitute foodstuff by the government and worked about as well as you’d expect – which was badly):

Meanwhile, Prime Minister Peel came up with his own solution to the food problem. Without informing his own Conservative (Tory) government, he secretly purchased two shipments of inexpensive Indian corn (maize) directly from America to be distributed to the Irish. But problems arose as soon as the maize arrived in Ireland. It needed to be ground into digestible corn meal and there weren’t enough mills available amid a nation of potato farmers. Mills that did process the maize discovered the pebble-like grain had to be ground twice.
To distribute the corn meal, a practical, business-like plan was developed in which the Relief Commission sold the meal at cost to local relief committees which in turn sold it at cost to the Irish at just one penny per pound. But peasants soon ran out of money and most landowners failed to contribute any money to maintain the relief effort.
The corn meal itself also caused problems. Normally, the Irish ate enormous meals of boiled potatoes three times a day. A working man might eat up to fourteen pounds each day. They found Indian corn to be an unsatisfying substitute. Peasants nicknamed the bright yellow substance ‘Peel’s brimstone.’ It was difficult to cook, hard to digest and caused diarrhea. Most of all, it lacked the belly-filling bulk of the potato. It also lacked Vitamin C and resulted in scurvy, a condition previously unknown in Ireland due to the normal consumption of potatoes rich in Vitamin C.
Out of necessity, the Irish grew accustomed to the corn meal. But by June 1846 supplies were exhausted. The Relief Commission estimated that four million Irish would need to be fed during the spring and summer of 1846, since nearly £3 million worth of potatoes had been lost in the first year of the Famine. But Peel had imported only about £100,000 worth of Indian corn from America and Trevelyan made no effort to replenish the limited supply.

Kindly note also the laissez faire response: people are dying of hunger because they’re too poor to buy food. Let’s sell them food (because giving food for nothing is bad for the market) but hey, since it’s a disaster, we’ll be bighearted and make it cheap!

Carlyle was not Laissez Faire

Which is not what I said; I said Carlyle disliked the Irish so much he probably would agree with the laissez faire result of ‘if they can’t pay, let ’em starve’.

Even in the worst years of the famine, Ireland was a net exporter of food

You write “smack-bang in the middle of the Famine” as if that were evidence that the repeal “had damn-all to do with the Famine,” rather than the opposite. Indeed, your own source says that it was a response to the Famine.

I am quite willing to have a fight with anyone, anytime, about Irish history 🙂

You write “smack-bang in the middle of the Famine” as if that were evidence that the repeal “had damn-all to do with the Famine,” rather than the opposite. Indeed, your own source says that it was a response to the Famine.

You, my dear Anonymous, claim that (a) the Liberal Party in England was formed to repeal the Corn Laws in response to the Famine – a claim I am sure would astound the Whigs, who were the forebears of the Liberals, and whose policy towards the Irish was much the same as their policy towards the Scots: the Protestant religion as the guardian of liberty, the House of Hanover as the guardian of the Protestant religion, and power centred in Parliament rather than the person of the king, in opposition to the Tories (forerunners of the Conservatives) who were Royalists for the House of Stuart and Anglicans suspected of tending towards Roman Catholicism; indeed, “Tory” itself ultimately derives from an Irish word or term for “outlaw”.

(b) the question of the repeal of the Corn Laws ran from 1815-1846. Peel may have seized the chance to push forward repeal by using the Famine as a stalking-horse, but I repeat: repeal did little or nothing to assuage the fact that the landless poor were starving and that food was being exported – even under the Liberal government that repealed the Corn Laws. It was not purely in response to the Irish situation that the Corn Laws were repealed; the Anti-Corn Law League had been agitating since 1838 and the poor harvests which led to scarcity in Britain had as much or more to do with it.

Repealing the Corn Laws was sold to the masses in England as “cheap bread”; as you point out, that’s not the question in a famine, where food is at a premium. Indeed, since the population was dependent on the potato as a cheap staple foodstuff, bread – no matter how cheap – would not take up the slack in filling the hole in the diet.

And, as we see with “Peel’s Brimstone”, repealing the laws to permit the import of cheap foreign grain didn’t do much, since the trial attempt was importing maize – a crop completely unknown on these shores, with the concomitant result that no-one knew how to mill it or what to do with the resulting product, and so it failed as the notional ‘cheap substitute for potatoes’.

Ireland was feeding England and producing the revenue that our imperial overlords demanded of us. The forces of the Crown – the army and police – mounted guard and oversaw the exportation of the ‘cash crops’ during this period. Imports of cheap wheat while meat, butter, fish, legumes and cereals were flowing out of the country was not such a marvellous result as you tend to make it sound.

And what was the end result of repeal? By reducing the price of grain, and permitting the importation of cheaper foreign grain, prices were indeed depressed but so were profits. This led to (a) consolidation of land; smaller, less successful farmers failed and their land was taken over and formed into bigger farms owned by one farmer (b) demand for labour was reduced (c) in Ireland, even more than Britain, the agricultural economy shifted from tillage to livestock – cattle and sheep were much more profitable.

So the end result was that the very class – the agricultural labourer, the landless poor – who were supposed to be helped (in your paradigm) by Corn Law repeal were instead driven ever more to emigration by lack of employment and low wages!

Ireland was feeding England and producing the revenue that our imperial overlords demanded of us. The forces of the Crown – the army and police – mounted guard and oversaw the exportation of the ‘cash crops’ during this period.

I won’t argue with you on the facts, but interpreting that sort of policy as “laissez-faire” is just a flat-out lie. And if your teachers told you it was, they were lying to you.

That is LF; preventing crime is the textbook minimal government activity.

The problem is Ireland had an existing exploitive social setup, and combining LF with that is a bad plan. It backfired in France in the 1700s when they attempted to rationalize the grain market, so the British really should have known better.

LF requires developed capitalism to function. Without low transaction costs, savings buffers and diversification, you can have an entire economy grind to a halt from repeated bad harvests. You need something to inject funds into the economy (work projects, charity), because nobody has products to sell, and trying to convert their possessions into cash results in a glut in the market, so the poor rapidly run out of money.

That is part of the reason neoliberalism didn’t do so well in the 1990s; I remember a book talking about how people in Russia could either pay for things or bribe the police and threaten the owners and when the latter was cheaper… you get the idea. Combined with the issue of unclear property rights encouraging people to loot property of assets instead of using it productively you get a bunch of criminals and oligarchs looting the country.

Peel was a Tory, but one regarded with suspicion by his own party, which is why he needed the support of the nascent Liberals and the Radicals to pass his legislation.

Laissez faire, as interpreted by the Manchester School, said you shouldn’t interfere with the market, no matter how good your intentions. Allowing cash crops to be exported so revenues could be raised and taxes paid was all part of the unfettered market, or am I somehow lying about that as well? It just so happened that Ireland being an agrarian economy (and that wasn’t an accident either; incipient Irish trade had been done down precisely through English protectionism so as not to be a threat to the ‘mother country’), our cash crops were in the main foodstuffs (not completely and solely, but largely).

MR. OSBORNE expressed his fear that if Government was to provide food gratuitously for the distressed poor of Ireland, it would create a system of eleemosynary relief, which ought by all means to be avoided. He could conceive of nothing so mischievous as leading the people to look up to Government permanently for relief and food. He would rather call upon Government to provide some regular employment for them. It had been anticipated by many that a great deal of employment would be created before this time, by the Railway Bills that were passed last Session; but he believed it was a fact, that none of these companies had as yet put a spade into the ground, in consequence of the want of money. He would put the thing in a more tangible shape, and would call on the Government to make grants of loans of money to these railway companies; by which means they would be able to proceed with their works, and to give employment to the people. Of what benefit would it be to send food into the country, if the people had not wherewithal to buy it? He did not know whether he had properly understood the proposition of the hon. Member for Finsbury; but certainly nothing would be so injurious to the people as to establish a system of eleemosynary relief.

But I realise I have committed the capital and most mortal of sins where Americans are concerned; I have spoken heresy of the Divine and Immaculate system of Free Market Capitalism.

I’ve gotten into trouble over this before, since I do not regard a human-devised economic system as some natural and fundamental underlying principle of the universe without flaw or spot as long as it is properly followed (the examples I adduced in my last row on the topic were all dismissed, one after another, with “That’s not proper capitalism”).

Since Elua, as far as I can tell, is the personification of “human values”, Elua demands merely that we follow our values. Of course, the problem is that our values may not be consistent.

(Also, I really dislike the use of Elua. Moloch is okay, but Elua annoys me. He got used as a metaphor already, in Niceness, Community, and Civilization. It confuses Scott’s ideology with universal human values. I like Scott’s ideology, but it’s the same tactic Scott notices here.)

Elua is universal (or close to universal, anyway) values. Which are obviously not perfect either, since hating the outgroup is probably as close to a universal value as you can get.

“How do you know?” Because anthropomorphic personifications are a bad way of describing this, unless one codifies the whole thing into a religion. Elua is what you define Elua to be. No other knowledge is required.

I enjoy the mythology some posters present that Elua isn’t strong enough yet. So we have to placate Moloch while we help Elua grow strong. Attempts to destroy Moloch before Elua is strong enough (see Communism) will lead to disaster.

(Though my main objection to our host’s entire conception of Elua is that human values are, if taken independently of Gnon, pure wireheading. The things we value are only epiphenomena of evolutionary drives, and a paradise of human values will ultimately be a fool’s paradise if it’s not driven towards the ends that these values existed for to begin with.

That, and the entire idea of Moloch as “what you sacrifice your highest values to in order to gain victory” is self-contradictory, as if you sacrifice them, they’re no longer your highest values. They take a backseat to another good which you value more. Otherwise you wouldn’t care enough to live without them.)

That, and the entire idea of Moloch as “what you sacrifice your highest values to in order to gain victory” is self-contradictory, as if you sacrifice them, they’re no longer your highest values.

Moloch is what some other group sacrifices your highest values to in order to gain victory over you.

When American revolutionaries use guerrilla tactics, they have sacrificed the value of honor in war to gain victory over the English. The revolutionaries _were_ English, but now they’re English – honor. Moloch rewards the (splinter) groups who are willing to sacrifice the (original) group’s highest values.

Moloch is always there, willing to trade you victory for a piece of your soul.

My personal interpretation was that Elua is a latent god. Moloch is to be feared, because we cannot trust Elua to save us. Eventually, we can bring forth the Invisible Nation and drive off Moloch for a while. It actually sounds a lot like Quakerism, although that’s probably just my personal biases speaking.

The optimal amount of alcohol to consume for longevity seems to be a function of gender, age, ethnicity, weight, and probably a bunch of other things. Can anyone point me to a good source (as in something that will help me figure out my optimal alcohol consumption, if any, when given these variables as inputs)?

2. Do you have alcoholism in your family? If so, be careful. (And if you’re an addictive personality, be careful.)

3. Otherwise: About two glasses of red wine a night. Red wine is better than other things. (Port is fine.) If you are less than 120 pounds, one glass is probably plenty.

The alcohol studies tend to have confounding factors. (Wine drinking has serious confounding factors.) I think trying to fit in the bunch of other things is mostly a bad idea; the evidence just isn’t robust enough.

This[0] contains a very brief overview of what’s been found through epidemiological studies; I spent a few hours a couple of months ago doing a breadth-first traversal of its bibliography. The sweet spot seems to be around 5-15g ethanol daily, depending mostly on sex.

It’s worth keeping in mind that while moderate ethanol consumption decreases all-cause mortality, it increases mortality from several specific causes, most notably the majority of cancers; and that its decrease in all-cause mortality can primarily be attributed to reduction in mortality from cardiovascular disease.

And as JRM says, I wouldn’t try this if your family has a history of alcoholism, or if you have a history of addiction. While most of the abovementioned article is sane, I think it does downplay the addiction risk a little too much.

The only source I can find right now is a Cracked article with dead links, but supposedly one possible explanation for the longevity of mild to moderate drinkers is that they are more likely to go out with friends. So (allegedly) the real cause is the magic of friendship.

I think an interesting approach then would be to compare life expectancy in the general population to life expectancy among Mormons, Muslims, and any other cohesive religious communities we can get data on that have a widely-followed prohibition on alcohol use (i.e. non-drinkers who avoid the ‘most non-drinkers are confounded because they are likely to have health problems that mean they can’t drink’ problem), and then try to factor out how much ‘spending time with friends’ serves to increase longevity among such groups.

You could probably at least mitigate this effect by comparing, e.g. Protestant denominations with a prohibition against alcohol to Protestant denominations without one. I’m not sure which ones would be the right ones to study to get people who are otherwise as comparable as possible, though.

I don’t recall sources, but IIRC when I investigated, the optimum alcohol consumption in the most extreme scenario (older sedentary male) was 1.5 drinks (around 30ml) per day, preferably red wine. In the best case scenario (anyone getting enough exercise) the optimum was 0 drinks per day. My impression was that if you don’t clog up your arteries there’s no need to pour in any Drano.

Those claims are way more detailed than anyone else’s. You claim to have a fairly detailed understanding of why alcohol helps, while most people aren’t entirely convinced that it does help. This should alert you that probably you are either in possession of useful information, or confused.

I know it is too late now, but I would be interested in seeing the mental disorders question expanded. I get the impression that the rationalist community has a higher rate of these, but I’d be interested in some hard numbers. Maybe next year.

Five dollars in the pocket of someone who’s otherwise broke is a bigger boon to quality of life than that same five dollars in the pocket of a multimillionaire; money has diminishing marginal utility. If you have a set number of resources, then, the way to maximize total welfare is to give equal amounts to everyone. So how much you care about slicing the pie equally versus getting a bigger pie probably depends on whether you think it’s easier to make the pie bigger without changing the portions, or make the pie more equal without making it smaller.
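The diminishing-marginal-utility intuition can be made concrete with a toy model where utility is the logarithm of wealth (a common but by no means settled assumption; all numbers below are purely illustrative):

```python
import math

def utility(wealth: float) -> float:
    # Toy assumption: utility grows with the logarithm of wealth.
    return math.log(wealth)

# Transfer $5 from a millionaire to someone with $20 to their name.
gain_poor = utility(25) - utility(20)                    # ~0.223 utils
loss_rich = utility(1_000_000) - utility(1_000_000 - 5)  # ~0.000005 utils

# The poor person's gain dwarfs the rich person's loss, so under this
# model the transfer raises total utility; iterating the argument drives
# the optimum toward equal shares of a fixed pie.
```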

I agree with this except in some VERY pathological cases, such as when there are two people and only enough resources for one person to survive if they have all of it. Then you are better off giving all the resources to just one of them.

But in general I agree the logical conclusion of utilitarianism + mild assumptions on human nature is that equal distribution is ideal given fixed resources.

That’s reason number one. A possible further reason is that relative wealth influences happiness via self-perceived status. If this effect is negative-sum as disparity increases, it could imply that even financial Pareto improvements reduce utility if they’re too unequal (whereas the first reason cannot be an argument against a Pareto improvement). Obviously this is much more speculative.

Here I’m talking about TOO MUCH inequality, like the .01% holding over 50% of the wealth. I’m all for state lotteries that give some random cigarette buyer several million dollars to play with. It’s good that some people and some corporations have plenty of money to experiment with, and some children can get super-excellent education and backing to use it, etc.

But wealth snowballs: the more you have, the more you can get. A little too much inequality can snowball geometrically into a whole lot too much, especially if its expansion is pushing other people down the ladder. Say Alice owns a factory where Bob has a job with health benefits, retirement, etc., and is buying his own home with a mortgage he can comfortably meet. If, for whatever reason, Bob gets laid off, he loses his house and goes back to the factory as a part-time independent contractor with no benefits; Alice gets a cheaper worker, buys Bob’s foreclosed house, and rents it back to him. Alice can also buy politicians and capture regulatory agencies as needed.

This is not to say that Alice Rearden would necessarily behave this way. But too much inequality makes it easier for this sort of thing to happen, even unintentionally. Which makes the many middle class people sad without making the few very rich any happier. A bigger pie doesn’t help much, because no matter how comfortable the house is, Bob will not be happy trying to balance his low insecure wages and his rental payment.

Inequality of wealth leads directly to inequality of power, and when Alice is significantly more powerful than Bob, that means she has the ability to hurt him and get away with it. Historically, in the absence of accountability, moral conscience does not consistently restrain Alices from abusing their power, so Bobs are empirically justified in wanting to avoid or escape the situation.

Isn’t this a problem that institutions, good governance, and the rule of law are supposed to deal with? If you have a society where the powerful abuse the weak, trying to eliminate all power is unlikely to be a feasible solution.

(Wow, the socialists fall into the same trap as the libertarians! That’s interesting.)

If we want to talk about what happens empirically, talk about seizing and redistributing the wealth of the rich should be laughed right out of the comments section.

Somehow there are places in this world with better and worse governance, better and worse institutions, greater or lesser adherence to the rule of law. Nominally Communist places do not seem to be among the better, despite their valiant efforts to stamp out the scourge of power.

We’re talking inequality of wealth – as the experience of Sweden shows us, ongoing tax-and-spend does not actually do much to defeat that. You need to seize and redistribute the wealth, which is what in practice does not work.

Well yes, it is a problem that institutions, good governance, and the rule of law are supposed to deal with. But the way they deal with it is by equalizing power. You do not eliminate power; you just try to ensure that it’s equally distributed, or at least that the weak have it when they deserve it (e.g. recourse to the courts).

I am of the opinion that it’s impossible to give a minority power when, and only when, they deserve it. Our society is bad at making rocks so heavy we cannot lift them; a constitution that protects the weak in specified situations is only as good as people’s willingness to follow it. In practice, either the strong find ways around laws designed to protect the weak, or minorities find ways to overrule/dominate majorities when it isn’t necessary to protect them.

Thus, equality of political power is paramount. In the last instance, you must be empowered to look after your own interests; you can’t count on a wise and just ruler, or pretty and high-minded phrases in a constitution, to protect them.

We can tolerate inequalities in other kinds of power (e.g. the power to buy swimming pools, social status of your caste etc.), to the degree it can’t be converted into political power. But social and economic power can all too easily be converted into political power.

It seems likely to me that too much wealth concentration hurts the economy. Poorer people spend a larger fraction of their income and they spend more of it on local goods and services. Richer people spend less of their income and less on tangible goods and services that actually employ people (a rich family with 1,000 times more wealth than a poor family doesn’t buy 1,000 times more cars, food, etc.). The money the rich didn’t spend on stuff is invested, but I think a lot of the places those investments go like offshore tax havens, government bonds, and speculative markets have questionable benefits for the economy at best.

Inequality in the present context means more poor people in worse conditions, including a growing class of people who are not just unemployed, but unemployable. A growing police state becomes necessary to maintain order, and even then there’s still the risk of massive civil revolt. I think pretty much all of that is bad for reasons ranging from simply caring about those people’s lives to not wanting society to collapse or suffer any major setback.

“It seems likely to me that too much wealth concentration hurts the economy. Poorer people spend a larger fraction of their income and they spend more of it on local goods and services. Richer people spend less of their income and less on tangible goods and services that actually employ people (a rich family with 1,000 times more wealth than a poor family doesn’t buy 1,000 times more cars, food, etc.). The money the rich didn’t spend on stuff is invested, but I think a lot of the places those investments go like offshore tax havens, government bonds, and speculative markets have questionable benefits for the economy at best.”

This is an incoherent combination of pop-Keynesianism (itself incoherent) and other stuff that doesn’t actually compute.

1. Labor is employed in the creation of newly-produced goods or services, including capital goods

2. What the heck do you think “offshore tax havens” are, magical money trees? Money invested through them is equally “invested” with money not in said tax havens; the advantage is differential tax treatment, not some magical power to make money out of thin air.

3. Speaking of which, there is only one investment that is capable of such: the aforementioned government bonds. Yet you are skeptical of the value of those bonds. Do you think it would be good if the rich people collectively boycotted the government so it couldn’t borrow? Would you support making it illegal for the government to issue debt (which has the identical outcome)?

4. Aside from government debt, when you buy financial assets the money has to end up doing something. Either you do something like buy a bond from a borrower and the borrower then spends the money on stuff – be it wages, consumption goods, capital goods, or services – or you bought an asset from some other asset-holder, at which point that asset-holder now has some money which they probably want to spend on something. (Unless they just really really really like to hold on to dollar bills.)

5. Even if somehow you decide that the rich as a class really do like to hold onto regular dollar bills, the answer to this is… have the central bank print more dollar bills. Eventually the rich’s demand for hoarding dollar bills will be satiated and everyone else will have enough money to get around.

1. Agreed. So include capital goods in the set of “tangible goods and services that actually employ people” that we’re talking about. It still seems plausible to me that, say, a super rich person like Warren Buffett with a net worth around $60 billion has created fewer jobs than if you were to divide up that money and give a million poor people $60 thousand each. IIRC Buffett himself has argued the same thing.

2. What I think offshore tax havens are is offshore. Maybe this wasn’t clear, but I’m talking about what hurts or helps the economy of a country. I don’t think it’s contentious to say that it’s bad for a country’s economy when its wealth is invested in foreign lands rather than domestic.

3. I think in general, borrowing is good when you use the borrowed money to invest in something which gives greater returns than the interest on your borrowing. I doubt this is true of most government debt spending, so yes, I am generally against government debt spending and doubt the economic value of rich people investing in government bonds.

1. “Number of jobs created” is not a logically coherent concept without a specified notion of ceteris paribus, which notion is not likely to obtain in the real world. (The version of it which is likely to obtain in the real world suggests that any individual abstaining from a particular purchase [of capital or consumption goods] has little to no effect on aggregate employment under most conditions, and that this remains true even multiplied across many individuals or many purchases.) The idea you are looking for is “labor demand”, and there aren’t good a priori reasons to be prejudiced between sources of such demand. Capital good demand may flow through primarily to demand for low-skill labor, and consumption good demand may flow through primarily to high-skill labor, or vice versa. Additionally larger stocks of capital goods raise wages via complementarity.

2. Do you actually think all of those Cayman Islands accounts are invested in stuff physically located in the Cayman Islands?

3. You’ve missed the point, which is that income or wealth inequality is not the cause of government borrowing. Government borrowing is to a first approximation exogenous to such. If you want to fix the problem of government borrowing, just… suggest that the government not borrow.

5. The money is printed and buys financial assets, typically government bonds.

“I don’t think it’s contentious to say that it’s bad for a country’s economy when its wealth is invested in foreign lands rather than domestic.”

It’s not contentious, it’s dumb. If residents of a nation invest their money abroad, it’s because they think they can get a better return abroad than at home. This can be either because home is economically stifled, in which case the offshore investment is a symptom of bad economic policy, or because the home economy is more “mature”, and a country or group of countries is experiencing rapid growth (China from about 1991 – 2011, for example) which is unlikely to be duplicated at home, or because the investor has some site-specific knowledge (or contact) with another country which allows him to make better-targeted investments abroad than he might at home.

None of these is intrinsically harmful. The actual thing that leftists complain about is that the money stays offshore, but even that isn’t actually a harm; it’s an expression of American imperialism, the American state asserting the right to tax anything anywhere.

1. Or it means he made foreign investments and employed people somewhere else. Or it means he invested in intangibles like government bonds with questionable economic results. Or it means he spent more money on heavily marked up luxury goods or other status signals. (I first thought of positional goods rather than luxury and status signalling, but in the former case the seller could go on to use the money more productively.)

2. That might be a good point, but is the state of, say, American foreign investment comparable in terms of wealth returning to the domestic economy?

@Alex Godofsky

1. “The version of it which is likely to obtain in the real world suggests that any individual abstaining from a particular purchase [of capital or consumption goods] has little to no effect on aggregate employment under most conditions, and that this remains true even multiplied across many individuals or many purchases.”

That depends on how many individuals/purchases we’re talking about; many individuals, for a large enough definition of ‘many’, is the entire economy. I understand that if we look at America and consider “the rich” to be the 0.1% of people with assets in excess of $20 million, we’re talking about something like 20% of the nation’s wealth. Would you seriously suggest that the holders of 20% of the wealth have little to no effect on aggregate employment?

‘The idea you are looking for is “labor demand”, and there aren’t good a priori reasons to be prejudiced between sources of such demand …’

My reasoning is that as a first approximation, the health and value of an economy is going to come from how many goods and services are produced by it. When poorer people have money they tend to spend more of it on goods and services like cars, food, medical care, etc. This supports jobs and infrastructure in those industries and thus improves production of those things. When richer people have money they tend to spend more of it on a whole host of mostly abstract stuff like lobbying and legal battles with other rich people; funding dart-throwing financiers and paying programmers to compete with other programmers to see whose systems can micro-transact more money out of the digitized stock market; lending the government money, much of which will end up in economic black holes; etc. A lot of stuff like this doesn’t seem to have much connection to actually producing goods and services.

Though thinking about it, I suppose the lawyers, financial people, etc. are providing some kind of service; it’s just a service that largely exists to combat itself, pulling money away from the rest of the economy, which provides the goods and services that are intrinsically valuable. I’ll relinquish these examples until I think more about whether there is an a priori reason to disfavour such things.

2. No, but unless you’re saying you think that offshore accounts don’t invest significant amounts anywhere other than the investor’s home country, I think my point stands.

3. I agree with that point. Never said otherwise.

5. In any case, I don’t disagree that money generally ends up doing something. You don’t really see rich people storing millions of dollars under their pillows.

@Anthony

“It’s not contentious, it’s dumb …”

These all look like arguments for why domestic investors might want to make foreign investments. But I didn’t say that foreign investment is bad for domestic investors, I said that foreign investment is bad for domestic economies.

“None of these is intrinsically harmful …”

Even granted perfect taxation of foreign investments, is it not intrinsically beneficial to an economy to have those investments and the increases in jobs, infrastructure, and other things of economic value that might result from them happen locally rather than somewhere else?

And yet concentration of wealth is one of the ways in which people in society cooperate. Think of every large-scale object or capability in the modern world. They all exist because wealth was sufficiently concentrated to build them. Cell phone networks are huge investments that are only possible because cell phone companies have tons of money, and so the people who own those companies have tons of money. Same with large commercial airplanes. The only reason Elon Musk was able to attempt the things he’s attempting with Tesla and SpaceX is because he was very rich and he could get money from other very rich people. Same with railroad and telegraph networks, and before them sail-based global trading networks and empire-wide road systems. The Roman Emperor personally owned half the wealth in the empire (caveat: their conception of government seems to include that political leaders spend their personal wealth for the good of the empire). Even further back wealth was the ability to force people to do work, which is how early humans built the irrigation and other agricultural works that allowed civilization to flourish.

I’m not trying to make the specific argument that rich people support certain luxury industries that otherwise wouldn’t exist if wealth were more evenly distributed. I mean every significant human endeavor that benefits a large number of people requires wealth concentration.

That’s not to say that wealth concentration doesn’t cause problems. My informal reading suggests that it absolutely does. And wealth concentration also allows the wealthy to create armies that they can use to kill lots of other people. But Moloch is everywhere and ruins everything; there’s just not much we can do about it.

(Now that I’ve written this post I may just be repeating Scott’s earlier Moloch post. But people railing against wealth concentration without considering the benefits of it has been bugging me for a while.)

I really don’t disagree with what you’re saying. The concentration of private wealth can fund great human works, and the ability to accumulate private wealth is a great motivator for economic activity. But what is good to some degree or even in general may not be good in excess.

This book argues that inequality leads directly to higher rates of just about every prominent form of other societal ills. I’m not sure how strong their case is, but to the degree that they’ve got their numbers right, and their causality arrows pointed in the correct direction, then the reasons to care are myriad.

Tino Sanandaji made a good debunking of this book some years ago (bonus point: Wilkinson, one of the authors, responded). In summary, Tino argued that the book only correlated some stuff with other stuff and implied it revealed causation. Tino also did a regression of life expectancy on per capita GDP for all countries the UN had data for and found… nothing (the correlation wasn’t statistically significant). Regressing life expectancy on inequality, using the Gini coefficient (instead of the 20:20 richest-to-poorest ratio used in the book), for the OECD countries revealed the correlation as not statistically significant, and positively (!) correlated with life expectancy at that.

I don’t think the author of a post called “The Control Group is Out of Control” (i.e., our eminent host) would like very much the methodology used by authors Richard Wilkinson and Kate Pickett (Piketty-like in her hate of inequality!***) in The Spirit Level.

“As you notice, we have another variable that is not only not statistically significant (p value 28.6%), but that goes in the opposite direction of what The Spirit Level claims: according to WHO data more unequal countries have less mental illness!”

In that review, Scott noted the heavy use of correlations to imply causation. I think he would not “find Spirit Level’s statistics slightly more convincing than its critics’” if he knew that many of these correlations “disappear” (become statistically insignificant) by simply using the standard measure of inequality (i.e., the Gini coefficient) instead of the 20:20 ratio preferred by Wilkinson and Pickett.
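The significance question the two sides are arguing over comes down to a standard test on a correlation coefficient: compute Pearson’s r, then t = r·sqrt((n−2)/(1−r²)) and compare it against the t distribution with n−2 degrees of freedom. A minimal sketch with made-up data (the country figures below are invented for illustration, not Wilkinson and Pickett’s or Sanandaji’s actual numbers):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Gini coefficients and life expectancies for 8 countries:
gini = [25, 27, 30, 32, 34, 36, 38, 41]
life = [81, 80, 82, 79, 83, 80, 82, 81]

r = pearson_r(gini, life)  # small positive correlation
n = len(gini)
t = r * math.sqrt((n - 2) / (1 - r ** 2))
# The two-tailed 5% critical value for t with df = 6 is about 2.447;
# here t falls well below it, so this correlation is not significant.
```

With only a few dozen countries in a sample, even moderate-looking correlations can fail this test, which is how the choice of inequality measure (Gini vs. the 20:20 ratio) can flip a result in or out of significance.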

If you want a well-thought-out, logical attack on inequality, I agree with the reasons other people have given, particularly NonsignificantName’s. But if you want my actual personal answer to “If the answer seems terribly obvious to you, what is that obvious answer?” it’s this: it’s ugly. I have a gut aesthetic reaction against inequality. Always have. I perfected it (and graduated into a person who agrees with, if not necessarily practices, the illiberal left) after a few years of working on the streets of DC (not as like a social worker or anything, but literally on the streets), where homelessness is prevalent in stark contrast to both the “K-Street” new money and the neoclassical “old-money.” It really was disgusting to me, and still is.

This is interesting, because I have almost the opposite reaction. I don’t find anything particularly upsetting about inequality, even extreme inequality. On the other hand, I find hierarchy to be natural and aesthetically pleasing. This doesn’t prevent me from pitying those on the bottom, but my criterion of social utility is whether those on the bottom are fed, clothed, and otherwise delivered from needless suffering, not what the delta is between the bottom and the top.

This basic difference in aesthetic taste is probably innate and incommunicable, which is why politics can never succeed.

I cannot but notice you putting “fed, clothed” but not “housed”, in a response to someone who had been working with the homeless.

This is probably because feeding and clothing are not redistribution issues in the rich world – we have wildly more than enough food and clothes, if I give someone a sandwich my wealth is not significantly impaired.

Housing is a redistribution issue, people believe that the value of their house is a significant fraction of their wealth and that this value can be adversely affected by the kind of activities that let people at the bottom be housed.

In the worst cases, people behave so as to demonstrate that they are more attached to the fallow field behind their house than to the prospect of moving three hundred people out of ineptly-managed hostel accommodation.

To be clear, I did not work with the homeless. I was a tour guide who mostly did sidewalk tours, so I met and interacted with homeless folks routinely, even got to know a few fairly well. I tried my best to treat them as human beings, but I wouldn’t want to give the impression I was there for altruistic reasons.

Like most things, inequality is more than one thing. The rigid, caste based kind, that denies opportunities based on gender and class, is inefficient. Who knows how many potential Einsteins spent their lives herding goats? And that’s just what NRxs, as opposed to mainstream conservatives, want to bring back.

Really? I may have got a skewed sample by only having read Moldbug and Nick Land, but I thought the point was they wanted small city states ruled by a (hopefully) benevolent monarch who is forced to maximise GDP/”utility”/some economic measure because of competition with other such city states. In such a situation, it seems unlikely that anyone would want Einsteins herding goats.

Moldbug does talk about castes like brahmins, vaisyas, dalits etc but I thought that was meant to be purely descriptive, not prescriptive.

Even in India, the historical caste system was less rigid than is commonly believed, and much of the caste system that we know today is an artifact of the British experience, which imposed legalistic structures upon caste that were foreign to the historic practices, which were much more fluid.

I don’t think anybody wants Einsteins herding goats, though I think most NRx want us to give up the illusion that we can turn all the goatherds into Einsteins as well.

Only reproductively. There was always an escape hatch: lower-caste men could aspire to a higher level through brahmacharya (i.e., celibacy). Which wasn’t terribly different from pre-Enlightenment, post-Roman Europe, where you were pretty much stuck unless you joined the religious hierarchy.

Interestingly, the sources we have (there aren’t many) suggest that early medieval European social standing may have been more economic than caste-based. The Geþyncðo, an Anglo-Saxon English treatise written around 1000 AD, outlines how social mobility might have worked: loosely, if you weren’t of noble birth but had substantial personal holdings and an appropriate level of responsibilities, then you were entitled to aristocratic rank. This was probably more descriptive than prescriptive, though.

City states ruled by (hopefully) benevolent monarchs sounds harmless, especially if everyone is free to leave…

…until you realize that children will be born into them. And there is an ethical imperative for the rest of the world to save children from harm or suffering independent of what local governments or parents want.

Figuring out a good general approach to managing children — or, more generally, societies containing people of widely varying experience, mental capacity, and dependence on others — is a hard problem, and a purely laissez-faire approach is unsatisfactory to me for a number of reasons. But I don’t find it satisfactory to allocate unbounded responsibility for care to adults/the more capable and unbounded rights to care to children/the less capable, either, and your proposed system of imperatives seems to reduce to that in practice.

When I’ve thought about ways of resolving this in the past, my ideas have usually revolved around freedom of information and movement, but I’ll be the first to admit that there are probably holes in that.

Nornagest, you’re right that it’s a hard problem, and in realpolitik you have to consider that people reject unbounded responsibilities and respond to more selfish incentives, and also that a lot of unpleasant people have bombs and guns, so their preferences have to be accommodated accordingly, at least to some degree of realistic pragmatism.

The alternative would be nuking everyone to hell, but that also harms children. 🙂

peterdjones, it’s not about the right to exist. I didn’t want to go into population ethics. It’s about the right not to be forced to live under negative circumstances. Considering children are basically captives under all current legal systems, this either has to be changed in a way that addresses children’s low intellectual competence (freedom of information and movement, as Nornagest suggested), or a bundle of negative and positive rights specifically aimed at preventing suffering and harm for low-competence individuals, including minors, has to be built into the legal system.

nydwracu, we all have some form of soft power and political clout to put pressure on our own governments, as well as other governments in the world. In the case of city-states with monarchy, the answer should be not to support the concept, since the brains and personality of individual human rulers are single points of failure, and preventing another despot from achieving power is relatively easier than taking power from a power already in existence. In other words, don’t support monarchy. (Frankly, I don’t understand why intelligent people thought it was a good idea.) You can still support sovereign city-states, but their political makeup would require built-in mechanisms to protect the rights of individuals born into them, regardless of the whims of rulers.

I actually have different reactions from both of yours; I find nothing wrong with (material) inequality but have a deep rejection of hierarchy.

I do agree with Mai La Dreapta that “this is why politics can never succeed,” if I understand the intended meaning. None of these positions are wrong, just like no value judgment can be wrong. They just differ.

Aesthetically, the way I see inequality and hierarchy depends on what causes them and how they’re intertwined. For example, if the hierarchy is the kind of “Don’t get uppity, peasant!” told to a poor/low-status person who tries to do something that’d make them happier (and/or richer), that’s aversive to me. On the other hand, a subdued hierarchy as in a business (where people normally interact as if they were equal, but some hold the power to fire others) is fine by me. Whether inequality bothers me depends primarily on whether it’s arrived at in a procedurally fair way and secondarily on whether it’s meritocratic. Unearned deference and plundered wealth are bad, but procedurally fair hierarchy and inequality exist too.

Perhaps the differences in taste are only partially malleable, but I don’t think that it means that politics can’t succeed. People can reach a state of “X is aversive to me, but it’s right, regardless”. For example, some proportion of pro-choicers feel this way about abortion.

I can see a lot of where you’re coming from: reflexive classist contempt is always ugly. But that’s a vice of people, not of systems. (It is, alas, a very common vice.)

Here is my quibble with meritocracy: being born with high IQ is an accident of birth no less than being born the son of a nobleman. I don’t believe that meritocrats can plausibly claim to be more deserving than old-fashioned aristocrats, as in both cases the system hands out rewards and punishments for attributes that people have no control over.

It’s a vice of people, certainly, but it can be encouraged or discouraged by social norms. If people are taught that their social station is fixed at birth (e.g. “You’re an X, so act like one!” and “They’re a Y and should know their place.”), they’re more likely to believe it.

As for meritocracy – perhaps traditionally “merit” was limited to character traits, but it works better combined with the subjectivity of value: merit is an ability to produce whatever people are willing to pay for. People have only a limited control over their merit, but why should that matter? The biological components of merit aren’t a reward and lack of merit isn’t a punishment, as they’re both prior to the establishment of justice. In contrast, the privileges of aristocrats are handed out by a human-established system, which is subject to questions of justice.

I don’t think anyone is proposing a system where the smart are rewarded just for being smart. The point is that they are able to do (but don’t necessarily do) work of more value: there is essentially no limit to the potential value of a piece of software or musical composition. There is also the pragmatic argument that you need to reward your professionals well to encourage them to go through extended periods of education.

Right. I was going to say something about meritocratic systems generating more social value (in theory), but I don’t know that it’s relevant if your objection is to inequality per se, as was the case with the OP. If I understand you correctly, you’re fine with “natural aristocracies” but not with socially-created ones, which is a reasonable distinction on its face and I’m not much inclined to argue with it, except to point out that in practice the two are very hard to distinguish and often have the same pathologies.

Because humans evolved in an environment where there was very little security of wealth, almost all wealth was a wasting asset, and there was very little utility for what wealth could be stored. Anyone who was wealthy was hoarding, i.e. withholding something from the tribe. Thus we have envy (which we cannot personally act upon any more, mostly), and the desire to eat the rich.

Raising the philosophical skeptical question again. I know we’ve discussed this a lot before, but I think there’s still plenty to say. To try and restate my position briefly (including clarifications and revisions):

-A proper response to the philosophical skeptic must not make an assumption that can rationally be doubted. If you assume the senses are accurate, that memory is accurate, or even that you are not being deceived by an Evil Demon, and cannot justify it, then your response to the skeptic fails.

-Infinitism and Coherentism are right out, as there is no reason why an infinitist or coherentist model should correlate with reality. Basically the Isolation Objection. Can also be applied to weak Foundationalism.

-The fact that a model is not practically useful is IRRELEVANT. What is relevant is demonstrating that an epistemic system necessarily gets at actual truth.

-Ad hominem is of course irrelevant.

I’m not sure about this (feedback please!) but I think a brief summary is useful just so people are clear where I’m at. I prefer more complex stuff myself, but not everybody interested in this necessarily does.

The other reason I did it like this is because I figured this made my ‘assumptions’ about a proper response to the skeptic clear.

I’m not sure if you’re trying to imply that skepticism is circular and therefore self-refuting, but if so you’re wrong. The burden of proof is on those asserting any positive claim of knowledge, since without any reason to think we do know the default is that we don’t.

Yes you could argue that none of that can be known because evil demon. But that is precisely my point- we don’t know ANYTHING, as we cannot demonstrate it without assumptions.

Why do we need it to be without assumptions? Because the alternative is that we have Faith in the same sense a fideistic Christian has faith in God in whatever our starting assumptions are. Which means we’re irrational.

See, you still haven’t understood. You can’t make statements like “you’re wrong to say full skepticism is self-refuting” from a position of full skepticism. You can’t say it’s irrational to make assumptions, you can’t say we don’t know anything, you can’t say anything, and not even that. So shut up about it already.

My position is not technically unrestricted skepticism. It is more comparable to the Christian heresy of the Double Truth.

I believe that rationally speaking one cannot justify anything, but believe on FAITH (in the Christian sense) in external reality accessible through the senses, memory, etc.

What I am sketching out is what would be required to justify the truths of Faith through reason, in such a way that ALL the truths of faith (that is, common-sense assumptions necessary for a position to be non-skeptical by philosophy’s commonly accepted standard of what is and isn’t) could be considered rationally justifiable.

If I understand correctly, your argument is that my system doesn’t work because it cannot lead to any rationally known truths.

To which my response is: So what? It is a pragmatic, not a rational argument, to say that we must assume the possibility of truths. Rationally speaking, the possibility we don’t know anything cannot be ruled out a priori, but must be ruled out by a RATIONAL argument.

If somebody argues we can rule out the possibility we don’t know anything a priori, I can, precisely as you are, simply deny it based on the Evil Demon Argument, and they lack a rational case. Similarly for your pragmatic, not rational, argument.

> [In my] opinion, any true refutation of philosophical skepticism (which I honestly seek to the extent I can, but haven’t been able to find) must be able to overcome that as well.

I’ll try to make this point again: does the non sceptic have to defeat the sceptics, or can they just join in? If sceptics can avoid self defeat by putting forward claims as mere probabilities, or on the basis of faith, why can’t believers in science, realism and scientific realism?

suntzuanime:
You don’t seem to get it. Let me try and put this into another form- a script as if of an argument. Apologies for a poor use of expression, but I don’t know how to put this better.

A: [asserts some common sense proposition, e.g. my memories are real]
B: [Evil Demon Argument]
A: But the Evil Demon Argument can be used to deny the Evil Demon Argument!
B: The point is we can’t know your proposition either. There is always the possibility (not to be confused with probabilities) that the Evil Demon would be real, as we have no rational argument to show that it isn’t.
A: Isn’t it enough to show that the Evil Demon Argument creates doubt in the Evil Demon Argument?
B: No, because the Evil Demon Argument is about creating the possibility of something not being true. It is not about making certain of its untruth.

Sure there is a possibility the Evil Demon Argument is false. But there is also a possibility that it is true. And that possibility cannot rationally be ruled out, even probabilistically.

————————-
peterdjones:

IF you put things up on the basis of faith, you’ve conceded my point. In case it’s not clear, I have faith in the Christian sense, which is quite distinct from any rational analysis.

I have rejected probabilities, and never in this argument have I attempted to claim probabilities as legitimate.

>The burden of proof is on those asserting any positive claim of knowledge, since without any reason to think we do know the default is that we don’t.

Says who? You’re making a grab for the prior, there.

If I attach a high prior probability to the existence of an external reality, and the reliability of my senses and mind, then it is the evil demon hypothesis that has a high burden of proof. You’ve offered no argument as to why our prior is false, merely asserted that we “should” use a different prior.

One good example of a prior people actually propose giving to an AI would be something akin to Occam’s Razor – the least complicated model given the evidence is probably true. In which case, they would reject an evil demon *simulating* a universe as more complex than a universe without any demon. (This entails attaching zero probability to your brain breaking, though, which IIRC is still an open problem.)
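A minimal sketch of what such a complexity-weighted prior might look like, assuming the common form P(h) ∝ 2^(−k) where k is the hypothesis’s description length in bits. The hypothesis names and lengths below are invented for illustration, not taken from any real system:

```python
# Hypothetical description lengths, in bits. A demon *simulating* a universe
# must describe the universe plus the demon, so it costs strictly more bits.
hypotheses = {
    "universe": 10,
    "demon simulating a universe": 25,
}

# Weight each hypothesis by 2**(-description length), then normalize.
weights = {h: 2.0 ** -k for h, k in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

# The simpler hypothesis dominates: prior["universe"] is about 0.99997.
```

Note this never assigns the demon hypothesis zero probability, just a heavy penalty, which is the point being made about the open problem.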

In practice, the human brain starts out with too *high* a confidence in (what it perceives to be) “reality”, and updates downward when it encounters e.g. dreams.

Just like it starts out with so many other things that are, essentially, a result of epistemic luck – you can’t persuade a rock.

I think the dialogue could go more like:
A: [Common sense claim]
B: But that could be a deception of a weird Demon.
A: And so what?
B: So you can’t KNOW that [common sense statement], because there’s a possibility of it being wrong.
A: The logical inference you are making, that the weird Demon possibility implies impossibility of knowledge, is itself subject to being merely a trick of a weird Demon.
B: Yes, I admit that. I don’t know that knowledge is impossible, I can only assert that there’s a possibility that knowledge is impossible. But my criticism still applies to [common sense claim], doesn’t it? You can’t know that [common sense claim].
A: It’s entirely possible that I can, by your own admission.
B: But making the leap to actually claiming to know [CSC] is faith, not reason.
A: Is it?
B: Yes.
A: Sure that’s not a Demon tricking you about what faith and reason are?

And so forth. The Skeptical position may very well be entirely reasonable and everyone else insane, but I don’t think any skeptic can ever say this to try and cast doubt on other positions, since to do so would be to make a positive claim of the form “You cannot claim [CSC].” This is weaker than saying “Skepticism is self-refuting,” but it does nonetheless overcome it, since the position cannot make any challenges to any others.

MugaSofer:
To concede the possibility of a prior being in any way rational would be to concede my entire case, and would be a ludicrous position for me to take.

Any prior must come from a set of Rules dictating how probability works. These Rules need justifying. This cannot be probabilistic justification or the argument is circular.

—————

peterdjones: See the fricking obvious point, which I’ve made to you elsewhere, that probability is dependent upon rules.

————————–
Dirdle:
Metaphorically, it all collapses into non-knowledge. Remember, we don’t have probabilities here, as they’re subject to, amongst other things, the Evil Demon argument.

It is possible one way and possible the other. But it is neither rational nor knowledge to positively claim it one way or the other.

What CAN be positively said is that IF our reasoning is valid at all (which the non-skeptic must assume), then we cannot justifiably know that X, whatever X is, and we do not know that we can justifiably know ANYTHING.

But it is neither rational nor knowledge to positively claim it one way or the other.

How do you know that? Couldn’t it just be another Demon’s trick?

If you say “But it might not be rational to positively claim that knowledge is possible,” that’s a different statement to “But it is enough that it might not be rational to positively claim that knowledge is possible.” The latter is missing an additional ‘might’ – it only MIGHT be enough that it might not be rational etc.

If you don’t know that it’s not rational to positively claim that knowledge is possible, how can you object when people (even including yourself, maybe) say it is? It may seem like “they/I can’t know it for certain,” but couldn’t that just be another confounded trick of this smug, omnipresent Demon?

(The previously-made comparison to “What the Tortoise said to Achilles” appears more and more apt. I can concede forever that it could be the case that the possibility of the uncertainty surrounding the open question of […] impossibility of knowledge implies that I can’t rationally believe in anything, without ever conceding that I must necessarily believe that it would be irrational to believe anything – after all, that could just be a really dogged Evil Demon.)

Skeptics are doing less justifying of their own and more casting doubt on everybody else’s justifications. They (or at least I) are trying to demonstrate a possibility of wrongness. Without probabilities (which I also cast doubt on), this means we simply can’t know one way or the other rationally.

To historicize this discussion: I think that Carinthium is putting forward more of a Pyrrhonian version of skepticism than a Cartesian one. The Pyrrhonian skeptic doesn’t put forward that we can’t know anything as a positive claim, but rather takes a stance of agnosticism towards everything. His skepticism is less about advancing a positive thesis than about reacting in a certain kind of way to any positive claim that is made.

peterdjones- Not so. I am trying to show that the non-skeptical model ultimately comes to assumptions that cannot be defended, and is in that sense self-refuting.

Troy: Sort of. I’d revised gradually over time, including over some of my discussions here.

Restricting myself to what I think can be believed with rational justification: yes, I am a Pyrrhonian sceptic, but for very different reasons, so I don’t want to be classified too close to them. You’re right that I’m focusing on knocking down claims to justification rather than anything positive, and postulating things such as a Descartes-esque Evil Demon to show that nobody can demonstrate they aren’t true.

Anti-scepticism is only self-refuting if it has both arbitrary starting points and a rule against arbitrary starting points. Without a rule against arbitrary starting points, realism is self-consistent if unfounded, and that is a better place to be, epistemologically, than self-refutation… and a number of forms of scepticism are indeed self-refuting.

peterdjones: The rule against arbitrary starting points is an unusual case.

The rule against arbitrary starting points is not an arbitrary starting point, because it is an absence of assumptions.

If the starting point is arbitrary, then by definition there is no rational superiority of our starting point over its exact opposite (or if not that, then at least some contrary position). Therefore it is not rational to assume one is better than the other (barring some new argument demonstrating it is rational).

See my reply to Dirdle below, which is relevant to you as well.

———————-
Since it seems I can’t put my reply directly next to Dirdle’s, I’ll put it here. It’s understandable, in my view, if he doesn’t see it.

Slight revision to my thoughts: There is a difference between “Knowing” something and “Knowing that we know”. To “Know that we know”, we must both have the rational argument and KNOW, rationally, that it is a rational argument.

If a starting point is arbitrarily chosen, even if it turns out to be rational for an unknown reason we cannot know that we know it is valid. Therefore it is not a problem to reject it as a starting point.

Let me restate my basic view in non-technical language. I will state things in terms of the first person, but I assume your experiences are sufficiently similar to mine that you can translate.

I can be certain that I am having various experiences, e.g., as of sitting at a computer and typing, that I have various apparent memories of the past, and so on. I cannot be certain that these are veridical — e.g., that I really am sitting at a computer, that I really did have oatmeal for breakfast this morning — but I can be certain that I have experiences that represent these propositions as true: that these propositions seem true to me, if you like.

The best explanation of these experiences of mine is that they are, by and large, veridical — that there is an external world roughly of the kind that my experiences represent. This explanation simply and coherently explains my experiences. Any rival explanation either explains my experiences less well, or is intrinsically less plausible because it is more complex. An evil deceiver scenario, for example, is more complex because it posits the existence of something always unobserved, namely the deceiver. It also explains the data less well inasmuch as we wouldn’t expect the deceiver to be so perfect at deceiving me (unless we build into the hypothesis that the deceiver is perfect at deception, in which case we’ve made it more complicated and so lowered its initial plausibility).

This does not entail that skepticism is false. It just entails that it’s probably false, because common sense realism is the best explanation of my experiences in the sense explicated above.

I think that this argument can be made more rigorous in terms of probability theory, but this gets the basic idea across.

Now that I’m no longer swamped at university, I intend to go look up logical probability. I haven’t yet, for which I apologise. But a few preliminary thoughts.

The key here is, of course, how to justify each of the assumptions any theory of probability has. If these justifications are probabilistic, the theory is circular- it assumes the very existence of probability it is supposed to justify.

Logical probability has a very hard task ahead of it. It must somehow get around the Evil Demon as an objection to every single base assumption it makes.

I should also point out that the Evil Demon argument can be used to demolish 1+1=2, let alone less basic epistemological principles like the ones you posit.

Other points:
-As van Fraassen points out, just because something is the best explanation doesn’t imply it’s a good one.
-If we are being deceived, we don’t truly know the outside world. So how do we know perfect deceptions are impossible?
-Even ignoring all that, WHY do we consider a more complex explanation an inferior one?

Is it actually logically possible for a mind to exist that can be mistaken about 1+1=2? (In the abstract sense: people get math questions wrong, but not because they cannot intuitively understand math at an abstract level; rather because it gets too complicated to do quickly in the human brain.) I kind of have a feeling that it is not.

What would it even mean for 1+1 to equal 3? What would that entail? It seems to be totally incoherent as an idea.

It is inconceivable to us, sure, but what does that tell us about the real world? Haven’t you read the Sequences? Your brain is prone to all manner of errors; there’s no reason to suppose that its logical faculties alone are pristine.

I can think of one object, and I can think of three objects. I cannot think of any number of objects between one and three without some of the objects being broken. I can see one object sitting on the table. I place another next to it. There are now three objects sitting on my table. I can give one to my friend Andrew, one to my friend Beatrice, and one to my friend Clara. If I take one X away from XXX, there is only one X. If I add one X to that, there is again XXX.

Assuming (pace Jaskologist) that mathematical logic works and that I understand it reasonably okay, 1+1 = 3 implies P for any proposition P. Because if 1+1 = 3 then 1 = 2 and 0 = 1 and then any two numbers are equal.
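The steps in that derivation can be written out explicitly, using only subtraction and multiplication on both sides:

```latex
1 + 1 = 3 \;\Rightarrow\; 1 = 2 \;\Rightarrow\; 0 = 1,
\qquad \text{and then, for any } m, n:\quad
m = m + (n - m)\cdot 0 = m + (n - m)\cdot 1 = n.
```

So any two numbers become equal, and from there any proposition about numbers follows.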

None of that addresses what I think is going on in this thread, which is the hypothesis that maybe our reasoning ability is so fundamentally haywire that we can’t listen to anything we think we know. Which is possible, and not refutable.

But as Luke Somers says below, the correct response is “well, if we can’t reason coherently, we can’t reason coherently. So let’s assume we can reason coherently. Then we deduce that…”

(This is even more generalizable. You can’t actually justify your epistemology, because “epistemology” is a description of what “justification” means. You can never prove that induction works. But you can accept that you believe in induction, and determine that your epistemology is reasonably consistent, and then continue on).

Some mathematicians would say that it means that you have a contradictory formal system, to which no real-life situations are applicable, thus use of said system is pointless. Inconsistency results in the ability to both prove and disprove every statement, so the system has no value from a pure perspective. (Different mathematical philosophies may give different answers.)

Lambert: yeah, you have a contradictory formal system and thus all propositions are true. I was just working through (part of) the derivation of that fact that we necessarily have a contradiction there.

I’m not creative enough to come up with any rules for paraconsistent arithmetic on the fly, but while I suspect it’s possible, there’s also the easy answer to Jadagul’s point. Everybody has inconsistent beliefs, and yet nobody believes everything, because we don’t derive all the consequences of all of our beliefs. The obvious option for the character NonsignificantName is playing is not to trust a derivation with as many steps as Jadagul is taking (obviously a mistake must have been made somewhere, since 1 isn’t equal to 0, but it can’t be that 3 minus 1 isn’t 1, since it obviously is. Sadly, it’s just too much work to figure out where it actually all went wrong).

Protagoras: yeah, I agree. In the vein of Scott’s old post on epistemic learned helplessness (which is my actual favorite post of his, but I didn’t put it on the survey because it’s not a SSC post), when someone shows that two of your beliefs are contradictory, you have three options: reject your first belief, reject your second belief, or reject the argument. Depending on how sure you are of the two beliefs and how hard to follow the argument is, rejecting the argument could easily be the epistemically rational choice.

Theories of probability hardly ever propose the existence of anything. They usually say something like “if you want to handle reasoning with uncertainty, here is a way to do it”.

Admittedly, that assumes the “existence” of epistemic uncertainty… although that is hardly in doubt, and is what scepticism promotes anyway.

Admittedly, too, there are issues about the existence of non-epistemic uncertainty: quantum indeterminacy and so on. But I have never seen the existence of ontological probability argued by providing a probability calculus.

On the face of it, scepticism should have little impact on probability. Sceptics can’t claim that the evil demon actually exists, so all they can say is that, because of the possibility of the evil demon, no claim is certain… which is not news to the probabilist. However, I think there are areas where even standard probabilities are too confident: failing to take unknown unknowns into account, à la Taleb; not being able to quantify how far from 1.0 your best theories are, à la van Fraassen; etc.

You don’t seem to understand what I’m saying. Your response would only work if I were denying probability as a measure of states of belief.

Of course I can’t deny probability as a means of measuring states of belief as a ‘language’ of sorts. The problem is that a system of probability is said to have some connection with the world- we can “know” in a sense that the world probably exists. Therefore it has a connection with reality.

How are we to show that our subjective feelings have any relation, however loose, to what actually exists, or even probably do?

Any system of probability MUST have implicit rules of sorts, such as the rule that a simple theory is more credible than a complex one which explains just as much. For the theory to be justified, EVERY ONE of these rules must be justified. If the rules are only probably justified, it is circular.

And yes I’m getting to logical probability reading. But I don’t hold out much hope, as I’m getting the impression the so-called axioms are merely false ones that don’t capture the implicit rules.

You are making claims, and they cannot be certain claims, because you deny certainty. So you are making probable claims, and so you implicitly accept uncertain truth even while you explicitly deny it. You need to explain where you are getting your knowledge from much more than your opponents do, because self-contradictory reasoning is much worse than merely circular reasoning.

> I should also point out that the Evil Demon argument can be used to demolish 1+1=2, let alone less basic epistemological principles like the ones you posit.

I do not grant this premise. Contra Jaskologist and others above, I think I can directly intuit that 1+1=2. In Descartes’ language, I clearly and distinctly perceive its truth; in Russell’s language, I am directly acquainted with the fact that 1+1=2. So I do not think it is possible that a demon has just made it appear to me that 1+1=2 when in fact 1+1=3.

I will say similar things about Cox’s postulates from which his axioms of probability are derived.

> As van Fraassen points out, just because something is the best explanation doesn’t imply it’s a good one.

On this point we are agreed. This is where the formal probabilistic phrasing of the response is better than the informal best explanation phrasing of the response.

> If we are being deceived, we don’t truly know the outside world. So how do we know perfect deceptions are impossible?

I never claimed that they were, merely improbable.

> Even ignoring all that, WHY do we consider a more complex explanation an inferior one?

I’m afraid you won’t like my answer, but I think that this is also a fundamental a priori principle. In some cases, too, it follows from the axioms of probability. For example, it’s plausible that A&B is a more complex hypothesis than A, and (assuming that P(B|A) < 1) A&B is necessarily less probable than A.
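Spelled out, the conjunction point is just the product rule of probability:

```latex
P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;<\; P(A)
\quad \text{whenever } P(B \mid A) < 1 \text{ and } P(A) > 0.
```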

No apologies necessary — I am solely responsible for my spending my time debating on the Internet rather than working on my professional obligations!

I think that I recommended the first several chapters of Keynes’ Treatise on Probability, as well as Tim McGrew and Richard Swinburne’s work. If you really want to focus on deriving the probability axioms, though, I’d recommend the first two chapters of Jaynes’ Probability Theory: The Logic of Science. (Google will quickly find these for you in PDF format.) His derivation of Cox’s axioms is informative but not entirely rigorous; Van Horn’s discussion at http://ksvanhorn.com/bayes/Papers/rcox.pdf is more rigorous. (Also see Van Horn’s website for errata and some notes on chapter 2 of the Jaynes.)

Warning: both of these texts involve complex math. Unless you have done graduate-level math work, it is unlikely that you will be able to follow all the proofs. (I could not.) If you are intelligent and motivated, however, you should be able to understand the results.

Also, you may be aware of this, but Cox’s axioms are not the most widely used ones in probability. Kolmogorov’s axioms (mentioned, for example, at http://plato.stanford.edu/entries/probability-interpret/) are more widely used. These two systems have basically the same implications, but in my view Cox’s axioms are superior (for a variety of reasons) as axioms for a logic of plausible reasoning.
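For reference, Kolmogorov’s axioms say that a probability measure \(P\) on a sample space \(\Omega\) satisfies:

```latex
\begin{aligned}
&\text{(1)}\quad P(E) \ge 0 \quad \text{for every event } E,\\
&\text{(2)}\quad P(\Omega) = 1,\\
&\text{(3)}\quad P\Big(\textstyle\bigcup_i E_i\Big) = \sum_i P(E_i)
  \quad \text{for countably many pairwise disjoint } E_i.
\end{aligned}
```

Note that, as stated, these are constraints on a measure, not claims about what the measure must attach to in the world.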

There are plenty of responses to this based on common-sense ‘reasoning’, but the strictly rational response is that you are assuming a correlation between truth and usefulness which has not in any way been demonstrated.

Yet, how could it be otherwise? Reality is just the name we give to the thingy that determines our observations. Truth is the name we give to a correspondence between our beliefs and reality. We are interested in true things in order to negotiate the reality we experience.

So we start with we are experiencing something and go from there to determine how we want those experiences to play out.

How on earth can you legitimately apply an Ad Hominem in this case? The character of the person concerned is one of the very assumptions the skeptic doubts you can know, after all. Therefore any Ad Hominem argument against the skeptic is also a circular argument.

The example you linked to is saying that we should consider a person’s beliefs weaker evidence if we know that they are a liar (or otherwise unreliable.)

[Actually, it’s saying that we should not trust them to reliably precommit to things, which has nothing even vaguely to do with the ad hominem fallacy. But let’s steelman that to include other statements.]

For example: if Bill Clinton tells us that a box we haven’t seen inside is filled with gold, the fact that he is a known liar will somewhat reduce our probability estimate that there is actually gold; relative to what it would have been if someone else had told us that.
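The update described there can be sketched with Bayes’ rule. All the numbers below are invented for illustration, and `posterior_gold` is a hypothetical helper, not any known API:

```python
def posterior_gold(p_gold, p_claim_given_gold, p_claim_given_no_gold):
    """P(gold | speaker claims gold), by Bayes' rule."""
    numerator = p_claim_given_gold * p_gold
    denominator = numerator + p_claim_given_no_gold * (1.0 - p_gold)
    return numerator / denominator

# A generally honest speaker rarely claims gold when there is none:
honest = posterior_gold(0.5, 0.9, 0.1)  # 0.9
# A known liar claims gold largely regardless of the contents:
liar = posterior_gold(0.5, 0.9, 0.8)    # about 0.53
```

Same claim, same prior; the speaker’s record changes only the likelihoods, which is exactly the sense in which it is evidence rather than an ad hominem.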

But a logical argument is evidence for itself, independent of whether the speaker’s beliefs are strong enough evidence that we should already believe it.

Rejecting an argument based on the speaker is akin to looking inside the box, finding gold, and then telling Clinton that he’s a liar so we have no reason to believe there’s gold in there.

Tiny nitpick; it seems that we should only reduce the credence we give to the claim of a known liar if our evidence suggests that they lie more often than we would expect, compared to the baseline amount of lying we should expect from any randomly chosen person. I know a witness can be demolished in court if caught in only one lie, but that’s one of the many ways in which juries seem to make poor use of the evidence presented to them (brought to mind because I’m not aware that the evidence in the case of Clinton supports thinking he lies more than an average person, certainly not more than an average politician; he’s just been a major public figure and so has been scrutinized enough for some of his lies to be detected and publicized as such).

A. A person who has been caught in one lie, even minor or justified.
B. A person who has been caught in many, or a few very important, lies about subjects in Category K.
C. A person who lies about almost everything, almost all the time.

If B is making a claim about a subject in Category K, his record is a good reason to doubt his claim. But A’s record of a minor falsehood in Category S, is not much evidence against his credibility in Category K. (A may actually be more careful of his speech than average, because he knows he will be closely watched.)

‘Liar’ is a non-central fact about A, as ‘criminal’ is a non-central fact about Martin Luther King, Jr.

The problem with your model is that ANY model of probability which makes probabilistic truth claims implicitly must have Axioms of Probability. No system of probability can exist without these either explicitly or implicitly guiding it.

The attack then switches to the axioms. If the axioms can only be justified on probability, then the model is circular, as the legitimacy of probability as a method is what it is asserting, and yet is an assumption of its argument.

Probability doesn’t have to apply to reality. It can apply to subjective uncertainty, which sceptics can hardly deny.

It would be very helpful to me if you would look at an actual axiomatisation of probability, because they don’t do what you say they do. They don’t make any claims about reality, and don’t exactly make truth claims. They set up rules for handling uncertainty, and sceptics must be using something like them, because sceptics are using reasoning to put forward uncertain claims. (The only real alternative is to USE probabilist reasoning. But formalisation doesn’t add realism, it adds clarity.)

I was going to look it up, but the more I’m seeing from you people the more I doubt that such false axioms would mean anything.

ANY system of probability must have rules of the sort “X makes Y more probable”/”X makes Y less probable”. This is a truth claim, if an unusual one, about reality. If it cannot be justified, it is like the isolation objection to Coherentism- no connection to reality whatsoever, even a probable one.
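For what it’s worth, the rule “X makes Y more probable” does have a standard rendering inside the calculus, as a comparative claim between probabilities rather than a bare claim about reality:

```latex
X \text{ confirms } Y \;\iff\; P(Y \mid X) > P(Y).
```

Whether that comparative claim connects to reality is, of course, exactly what is in dispute here.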

——————————–
It depends on the skeptic, but in my case I am trying to show that ordinary ‘reasoning’ is ultimately self-refuting and proves itself irrational.

Some, such as the Coherentist, would say that ordinary reasoning allows for several unproved axioms. But the very existence of these unproved axioms allows for a version of the Isolation Objection to Coherentism- no connection to reality whatsoever, as we don’t know the axioms are true!

You are conflating two different issues. An axiomatisation might, for instance, define evidence as what makes a hypothesis more probable; that’s a rule, not a statement about reality. You can also apply probability to make a statement about reality which turns out to be false, due to lack of correspondence… but that isn’t a problem with the axioms or rules of probability.

And you still face the problem that you are happily using logic, which is a special case of probability.

A much softer target than the axioms of probability, is the prior. We have an implicit bias towards Occam’s Razor and a memory full of data consistent with it, but that’s all we have to go on. We might be able to update towards Occam’s Razor if we had a prior over priors, but that’s passing the buck.

That would be a ludicrously circular argument, as you are trying to attack probability using empirical evidence (claims we have a bias towards Occam’s Razor) when empirical evidence is part of what I’m attacking.

One of the justifications of Occam’s Razor is that, all else being equal, you should believe theories with the fewest premises, because the less you commit to, the less wrong you will be. That is itself a form of scepticism.

Calling something skepticism doesn’t give it any relation to the Skeptical Hypothesis we have been discussing thus far.

Occam’s Razor is like 1+1=2 in that it is vulnerable to the Evil Demon Argument. It is also partially inductive, in that it depends on assumptions about the nature of the brain, in most versions. Admittedly your version gets around that, though.

In summary, peterdjones your point seems fairly irrelevant here. This isn’t about some sort of Skeptical Identity, but truth and falsehood.

peterdjones: It is possible to justify an assumption ‘from nothing’, as it were.

To put it another way, what is reason and why do we want it? Rationality by definition involves a kind of argument that necessarily corresponds to reality, and we want it so we can know truth.

I establish we should care about this definition because all other definitions are pointless as they don’t have anything to do with what actually exists.

An argument which starts from an assumption has no necessary correspondence to reality, because maybe the assumption is false. If there is some reason why the assumption CAN’T be false, then it can be reformulated as a rational argument in which no assumption is required.

For all we know an Evil Demon could be deceiving us on the point, and because one might be, we know that we don’t know the argument is true; hence it is not rational.

————-

Even if you ignore all this, because your position uses assumptions it is merely faith-based. It cannot be considered rational, because the assumptions are rationally indefensible. If they weren’t, they wouldn’t be mere assumptions.

Rationality has little to do with correspondence, let alone necessary correspondence. You can even have rational arguments against the correspondence theory of truth.

Of course, once you introduce a requirement for something to follow necessarily, everything falls short. But you can’t justify the requirement for 100% necessity; 90% reliability is still pretty good.

I went through a phase like this. I’ve since concluded that if you’re looking for airtight justification, I don’t think you’ll find it. I have a number of intuitions for believing this (besides getting bored with chasing infinite regress).

I vaguely remember reading an analogy of the Uncertainty Principle one time. The idea was observing a particle necessarily involves interacting with it. If you’re observing a macro object, photons hardly affect it. But if you’re observing a particle with a photon, that’s like hitting a ping-pong ball with another ping-pong ball in a dark room. Therefore you will never have perfect information, even on a classical level.

There’s also Scott Aaronson’s explanation of QM which (if I’ve read it right) says the universe injects probability into observation on an ontological level. And by injecting complex probabilities into reality, the universe is able to gracefully sweep debates of discreteness and continuity under the rug (which is why Democritus is relevant). My point being if you can’t eliminate all uncertainty on an ontological level, you’ll have a snowball’s chance in hell eliminating all uncertainty on an epistemic level.

In case you haven’t seen this one yet (I bet you have), What the Tortoise said to Achilles. Yep, that’s right. Not even our beloved Modus Ponens is safe. It’s not assumed-during-discussions because it’s sacrosanct and immaculate of uncertainty, it’s assumed because it’s empirically useful and Just Works ™. Like Eliezer said, eventually you’ve gotta stop passing the buck and start asking “what is the buck?”

And consider Cox’s Theorem. I don’t think of logic as an extension of probability, I think of logic as a simplification of probability. Analogously, software that’s been guaranteed to work by a theorem prover is 100% guaranteed to work… except if the power goes out, or a meteor hits, or that weird off chance that a diode goes rogue (which usually doesn’t happen, since engineers crank enough voltage through circuits that computers can work reliably). Logic is an app running on an OS of Simplifying-Assumptions over a kernel called Probability.
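A toy sketch of the logic-as-simplified-probability point (my own illustration, not anything from Cox’s actual proof): when degrees of belief are restricted to the two certainties 0 and 1, the sum and product rules of probability collapse into the Boolean truth tables.

```python
# Toy illustration: with probabilities restricted to {0, 1},
# the rules of probability reproduce classical Boolean logic.

def prob_not(p):
    """Sum rule: P(not A) = 1 - P(A)."""
    return 1 - p

def prob_and(p_a, p_b_given_a):
    """Product rule: P(A and B) = P(A) * P(B | A)."""
    return p_a * p_b_given_a

# At the certainty endpoints these match the AND / NOT truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert prob_and(a, b) == (a and b)
        assert prob_not(a) == (not a)
```

Anywhere strictly between 0 and 1, the same two rules keep working; logic is just the special case where the uncertainty has been assumed away.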

Without airtight justification, you do not have any knowledge. You do not know anything rationally, as your argument makes a critical assumption that is not necessarily true (or is fallacious in some other way).

Paragraphs 2 and 3 are logically irrelevant to the Skeptical question.

It is true Modus Ponens cannot be justified by pure reason. Which is why I am looking for something that CAN be. Your appeal to Eliezer Yudkowsky is logically speaking irrelevant.

If you take Eliezer’s approach, then you are a religious man without knowing it. You are believing, on a religious type of faith, in some sort of base premise. After that, calling yourself rational is silly because your ideas have no connection to reality.

Cox’s Theorem is irrelevant here. We are looking for some necessary correlation to reality, making everything you say irrelevant. A probability that is not founded on certainty is useless rationally.

Rationality does not have to have infinite certainty, but it DOES have to be non-circular. A system of probability that depends on probability as an axiom is circular.

How on earth can you construct a rationality that is superior to faith without a necessary correlation to reality? If it doesn’t involve a necessary correlation with reality, you can’t say that it is better because it is closer to the truth. So what CAN you say?

You’re so hung up on this idea of “necessary correlation to reality”. What is reality, if our thoughts don’t correlate to it, and why do we even care? If it makes you feel better, you can imagine an implicit “or maybe not, but whatever” after everything everyone says to account for the possibility that everything’s nonsense.

Because there are many possible skeptical scenarios which are OUT OF OUR HANDS. There are practically infinite possible ways (if not infinite) we could die or be screwed over and there is NOTHING we can do about them!

I believe in the correspondence theory of truth, for what it’s worth. I’m not certain of which type within that fairly narrow zone, but I believe in it. Eliezer’s theory may be simplistic, but it’s close enough that if you understand it you should see that it’s objective regardless of our thoughts. That should help to clarify why external Reality is important.

Finally, of course, I want to be able to call myself a rational person, but both others I talk with and the metaphorical voices in my head keep calling me an irrational idiot for being unable to solve the Skeptical Problem. I can’t find any way to justify calling myself a rational person if I can’t solve it.

The last one is admittedly a personal question of my psychological identity, but the rest applies to EVERYBODY.

Because there are many possible skeptical scenarios which are OUT OF OUR HANDS. There are practically infinite possible ways (if not infinite) we could die or be screwed over and there is NOTHING we can do about them!

Yeah bro, it is precisely because they are OUT OF OUR HANDS that it’s so easy to say “or not, but whatever”. If we could do something about them, we might want to do something about them, and it might be negligent to ignore them. But since we can do NOTHING about them, we can just totally factor them out of the equation. Who cares?

Although I disagree with Eliezer Yudkowsky and the Sequences that we can know reality, I basically agree with him on how to define ‘reality’. See the Correspondence Theory of Truth for more details.

suntzuanime:
Let me use a scenario to illustrate how horrific the implications can get.

Say I am a person walking down the street trying to get home. Maybe walking down the street will get me home. Or maybe walking down the street will kill me, and walking the opposite direction will get me home. Or maybe dancing will get me home or incanting some ritual.

There are so many possibilities at work and so many combinations of assumptions. What you are doing is picking one, completely arbitrarily from a rational perspective, and saying “I’m going to assume this!”

But there is absolutely nothing about the set of axioms (memories accurate, rationality accurate etc) you have chosen that makes it rationally superior to one of many, many possible sets of axioms.

PS: Feel free to keep arguing, but of course you’re not going to persuade me. I have as I said, an additional reason of identity to keep going.

Plus I have a lot of irrational friends who, whenever I try to argue rationally, use the Sceptical hypothesis as a refutation and get on with their irrational behaviour. I need an airtight answer to them.

————————–
peterdjones: I figure somebody on this site should at least understand Eliezer’s notion of what it means for something to be real. Using that notion, it is flawed to ask what Reality corresponds to.

A claim can be wrong because the concepts involved are wrong. The burden is therefore not entirely on the antisceptics. You can always argue that truth, knowledge and reality aren’t available by setting the bar high enough, but that isn’t telling unless those concepts are the ones the realists are actually using.

Yup! and you’ll just have to deal with it. Do you think the cosmos owes you any justification? Do you think the Invisible Pink Unicorn is going to hand you Truth on a silver platter? Do you think Maxwell’s Daemon keeps a log of each and every event? Let’s face it, the universe was not tailored for us to solve with certainty.

We’re given some sensory perceptions and we can choose a few axioms. That’s it. Some organisms have more senses and some don’t have any. Our phenomenological situation may not be ideal, but there’s little reason to expect it to be. Yes, reason can take you pretty far, but at some point you’ve gotta abandon the raft and get on with your life.

A probability that is not founded on certainty is useless rationally.

P.S. Can we prove Probability Theory correct using air-tight non-circular reasoning minus an axiomatic framework? Nope. Is Probability Theory correct? Well, given the evidence of how useful it’s been in predicting “whether the sun will rise again tomorrow morning”… probably. Some call this “prediction”. Others call it “inference”. But if you want to call this “faith”, I suppose you are entitled to your definitions.

The idea that the cosmos somehow owes us a justification is of course a straw man. It is also a misunderstanding to say I want a solution with certainty: for a solution to be rational, it needs to not involve any assumptions. However, in theory such a solution could first demonstrate a theory of probability and then show that the universe probably exists.

Your second paragraph is a misunderstanding of my point. I never said not to believe in things. I said that such a belief is ultimately an irrational leap of faith in the exact same sense that Christianity is said to be believed on faith. What I was TRYING to do on this thread was seek a rational solution so no leap of faith would be necessary.

The point at dispute is not definitions, but whether or not my assertion is true that, in the EXACT SAME SENSE that in modern culture a religious believer is said to believe on faith, it is true that we believe in the existence of the world on faith. Rationally speaking, the two choices are identical.

Ah, okay. I accept that I’ve misunderstood you. So what you’re actually looking for is confidence that “science/rationality” is epistemically superior to “dogma/faith” – as opposed to looking to escape the Matrix. And you’d prefer a blessing from the Ghost of Perfect Emptiness rather than Azathoth. Is this correct so far? Assuming I’m reading you correctly:

I agree. Rationality isn’t any better than faith. There’s no reason to privilege a particular set of assumptions in Cantor’s Paradise over another set of assumptions. No sarcasm.

If you want to follow Occam’s Razor, then follow Occam’s Razor. If you want to worship Christ, then worship Christ. If you want to convert to Wicca, then convert to Wicca. If you want to start the Church of the Fonz, go do that. People are entitled to their opinions.

Plus I have a lot of irrational friends who, whenever I try to argue rationally, use the Sceptical hypothesis as a refutation and get on with their irrational behaviour. I need an airtight answer to them.

The lesson in What the Tortoise said to Achilles is that if someone steadfastly refuses to accept the basic premises of logic, there’s nothing you can do to force them. You can always try to convince them, but there are no guarantees.

The lesson in Axioms (wink wink) is that no one can stop you from believing whatever you want.

Okay, back to Azathoth-mode. If you want to sway your friends, I suggest you be so awesome at life that your friends can’t help but envy you and your powers of rationality. If this doesn’t work, then maybe rationality isn’t the winning way.

I still need to try to find a set of ‘true assumptions’ which somehow merit their privileged status. That’s the goal, not necessarily ‘rationality’ in the ordinary sense, as long as it fits my description.

I don’t know this, but I worry that some of them would still not like being rational and still cry “Skeptical hypothesis” anyway.

Assumptions aren’t universally true. They’re just propositions relevant to the particular subject matter. Like when someone says “parallel lines never cross”, people will generally assume he or she is discussing a subject in the context of Euclidean Geometry. If you protest “but what about the Earth’s meridians?”, that’s not really relevant because you can’t accurately model Earth with Euclidean Geometry. You’re talking past each other. Stick to the subject matter.

Think of assumptions as filters (or control flow). The way math works is you start with some core assumptions, and then derive theorems. It’s neat because if some system-of-tangible-objects passes through all the filters, you’re free to apply all relevant theorems. But if the objects don’t make it past one of the filters, you’re not allowed to apply the theorems on pain of stupidity.

For example, can we do arithmetic on the color green? I suppose we could define the successor function to mean an adjacent primary color. But then wouldn’t S(S(S(g))) = g? That violates the assumption S(n) != 0.

Instead of looking for Universally True Axioms, you ought to be asking “Does this particular model apply to the matter at hand? Does this actual system behave like any of the abstract systems I know of?”
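The color-arithmetic example above can be made concrete. A hypothetical sketch (the `COLORS` list and `successor` function are my own names, just for illustration): define a cyclic “successor” over the three primary colors and the system fails the Peano filter, because the designated zero element turns out to be a successor.

```python
# Hypothetical sketch: a cyclic "successor" over three primary colors.
# Treating green as the zero element, succession wraps around, so
# S(S(S(green))) == green, violating the Peano axiom S(n) != 0.

COLORS = ["green", "red", "blue"]

def successor(color):
    """Next primary color, wrapping cyclically back to the start."""
    return COLORS[(COLORS.index(color) + 1) % len(COLORS)]

c = "green"
for _ in range(3):
    c = successor(c)

# The cycle closes on "zero": the Peano assumptions don't hold here,
# so arithmetic's theorems may not be applied to this system.
assert c == "green"
```

The assertion passing is exactly the failure: a system where zero is somebody’s successor never makes it past the axioms, so you don’t get to use the theorems.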

but I worry that some of them would still not like being rational and still cry “Skeptical hypothesis” anyway.

So this discussion was never about logic, but general rhetorical tactics. Your friends are using what’s called motivated reasoning. No amount of logos alone will persuade them.

If I were in your position, I’d stop appealing to “reasons why foobar is accurate” and instead teach them to yearn for the vast and endless sea. Read a book on becoming a salesman or something. But beware, this can easily land you in Dark Arts territory. Tread carefully.

The brain runs on evidence; the Christian brain is no different. The evidence they use is the evidence of being raised Christian, or the good feeling they get contemplating ‘faith’, or whatever, but it is all based in reality. The reason this is called ‘faith’ instead of justified belief is that it is shitty evidence. If you believe, say, that holy water will cure your cancer based on the evidence that you think God works that way, then on average you will be wrong. If you use methods that have been developed for obtaining correlation between your beliefs and reality, then you will be less wrong on average and experience less surprise.

FullMeta_Rationalist:
An assumption doesn’t have to be like you say. Some assumptions, such as “The senses are valid”, are in practice universal assumptions. I’m trying to find ones I can say are true even in their context, without having to appeal to some ‘deeper’ base assumption.

I can’t do what you suggest. I would feel far too awkward advocating for a position I know perfectly well is irrational. I’ve argued in the past, but whenever I deliberately try to argue for the irrational I feel far too awkward.

peterdjones: I assume you know of the Isolation Objection to Coherentism? And other problems with Coherentism?

They apply to your system.

s2: You’re making an irrational circular argument for your ideal of rationality by assuming that Christians are on average more wrong, and thus assuming your memories and a whole lot of other things are accurate.

Memories plus a whole lot of other things are what make up reality. OK, we might not be able to trust our memories all the time, but by aggregating various sources of information a coherent picture can often be formed.

Since we have no experience of reality except through our experiences, I don’t understand how you can talk about reality as separate from them. Reality must be the thing that gives rise to certain experiences, else we wouldn’t be able to talk about it in the first place.

So we start with the present experience of experiencing reality, and our experience of memories of reality being coherent, and use these to predict future experiences. And when the predictions are wrong, they often seem to be wrong in systematic ways which can be corrected; then we predict, forming memories which show less of a difference between prediction and experience, which is what we name truth.

I could be fed all these experiences by an evil demon but I have an experience of these experiences forming a coherent, predictable whole which allows me to operate, and so I will.

I’m going to copy and paste an earlier post peterdjones might have missed.

peterdjones: The rule against arbitrary starting points is an unusual case.

The rule against arbitrary starting points is not an arbitrary starting point, because it is an absence of assumptions.

If the starting point is arbitrary, then by definition there is no rational superiority of our starting point over its exact opposite (or if not that, then at least some contrary position). Therefore it is not rational to assume one is better than the other (barring some new argument demonstrating that it is).

See my reply to Dirdle below, which is relevant to you as well.

———————-
Since it seems I can’t put my reply directly next to Dirdle’s, I’ll put it here. It’s understandable, in my view, if he doesn’t see it.

Slight revision to my thoughts: There is a difference between “Knowing” something and “Knowing that we know”. To “Know that we know”, we must both have the rational argument and KNOW, rationally, that it is a rational argument.

If a starting point is arbitrarily chosen, even if it turns out to be rational for an unknown reason we cannot know that we know it is valid. Therefore it is not a problem to reject it as a starting point.

—————

s2: You don’t even seem to understand the problem. If reality is, say, the construct of an Evil Demon, it is an act of faith to assume it won’t be taken away from us at any given time.

We also can’t aggregate sources of information if our memories are 100% untrustworthy, which is a possibility that must be considered.

In addition, if we are to trust our reasoning at all (which I don’t assume, but most do), then a correspondence reality, in the sense of statements which are true in the Correspondence conception of truth, must exist. Start trying to create a self-consistent hypothetical concept in which nothing is Correspondence-True, and you’ll come into problems very quickly.

Anonymous: Rationality is important because of its necessary connection with the truth. A single step in the reasoning being wrong or unfounded can lead the whole argument off in a completely false direction.

Twelve hours a day of Ruby doesn’t leave much time or many brain cells for anything else, but —

I derived my way out of ethical egoism a long time ago by realizing that certain sorts of altruistic behaviors were in my own interest (for both game-theoretic and psychological reasons), so you may be able to take a similar strategy to escape skepticism — unless you have a way to cast doubt upon your own present preferences.

Say the skeptic is trying to figure out whether to get Chinese or Mexican food for dinner. He has the impression — which could well come from an evil demon — that, in the past, he liked Chinese food and hated Mexican food. Where does he go for dinner?

If, when you go to get dinner, you rely on memories of your food preferences, that demonstrates that ‘the taste of food’, ‘the past’, etc. are all things that are useful to your present self. You can’t prove truth, but you can prove utility, and if truth is useful, you may as well act as if the things are true.

Ethical egoism is another story. Your argument there is interesting and credible.

With skepticism, however, things are different. I will allow that I do believe on faith (in the religious sense) in the existence of the world. But rationally speaking we do not in fact know that utility will come.

Say I am at a shop, considering whether I order Chinese or Mexican food. Perhaps if I order Chinese food I will instantly die on the whims of a random God and if I order Mexican I won’t? Or perhaps the other way round? (And of course I could be deluded that I have more choices than those)

For any hypothesis, under a skeptical world view the opposite is possible unless the opposite is incoherent. Hence, we can never know any action will lead to utility.

Sure, it’s not a rejection; it’s just a layer over it — even if ethical egoism is true, it’s going to lead me to act as if it’s false, so what difference does it make whether it’s true or false?

If I write a brainfuck interpreter in C and then write a hello world program in brainfuck, am I writing a hello world program in C?

If ethical egoism has me behave in the exact same way as altruistic moral theory M, and I can work out M starting from ethical egoism and then go on to address ethical questions using the toolset of M without ever having to touch ethical egoism itself, am I behaving according to ethical egoism or M?
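The interpreter analogy can be made literal. Below is a minimal sketch of a brainfuck interpreter (in Python rather than C, but the layering point is identical): the program it runs is written entirely in brainfuck’s own eight commands and never touches the host language, just as the questioner’s ethics run in M without touching egoism. The sample program is a toy of my own, not the classic hello world.

```python
def run_bf(code):
    """Minimal brainfuck interpreter: 8 commands, 30,000-cell byte tape."""
    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    jumps, stack = {}, []
    for i, ch in enumerate(code):          # pre-match the [ ] brackets
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        ch = code[pc]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".":
            out.append(chr(tape[ptr]))
        elif ch == "[" and tape[ptr] == 0:
            pc = jumps[pc]
        elif ch == "]" and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return "".join(out)

# 8 * 8 + 1 = 65 = ASCII "A"; the brainfuck program neither knows nor
# cares what language its interpreter happens to be written in.
print(run_bf("++++++++[>++++++++<-]>+."))   # prints: A
```

At the brainfuck layer the host language is invisible, which is the point of the question: which layer are you “really” programming in?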

That sounds self-contradictory. Many different ethical theories agree about how you should act in many situations, so does it matter which one you follow? Yes, because differences in the ethical theory you subscribe to are differences in how you justify your actions. For example, a utilitarian might reject slavery because it’s contrary to world utility maximization, while a Divine Command theorist might reject it because it is contrary to the commandment to love thy neighbor as thyself. However, most ethical theories disagree on some points.

It is conceivable that by some coincidence, you could find two ethical theories that agree on every act, but it still matters which one is true or false because the foundations of the theories would be different. For example, there could be a Divine Command theorist who believes that God says to do everything an ethical egoist would do. He would then act just like an ethical egoist, but his views would be vulnerable to attacks on Divine Command, whereas an actual ethical egoist’s views wouldn’t be. But more realistically, most pairs of ethical theories disagree about what actions they recommend in some situations, so there’s even more of a difference between them.

Right, but attacking the theory isn’t even what I’m concerned with; what I’m concerned with is taking the theory to the point where it abolishes itself and becomes something else. That’s easier than arguing against it.

If someone holds X and you want to convince them that Y, you don’t need to argue that X is false and Y is true; you can also argue that X implies Y.

But if you’re using egoist justifications for your actions, your egoism hasn’t abolished itself, it’s just justifying different actions. Actual abolition of egoism would be if the justification of your actions weren’t ultimately grounded in your self-interest.

If X and Y cannot simultaneously be true, then in order to convince someone who believes X, you have to argue that X is false, and altruistic and egoist ethical theories are mutually exclusive.

I’m not using egoistic justifications for my actions; I’m using a different set of justifications, and then using egoistic justifications for those.

A similar line of attack against Cartesian-demon-style skepticism would be: “I can’t establish the existence of things outside my mind, but I can establish the existence of my internal mind-states, and I can observe that my mind cares about certain sets of things that my language gestures at with the phrase ‘the real world’. I can also establish that, if the Cartesian demon exists, I have no way of finding out whether or not the ‘real world’ is an illusion he created, at least until he ends it (if he is creating it) — so I may as well believe in the ‘real world’, so what difference does this form of skepticism make? Even if I buy it, it leads me to the conclusion that I should act as if I don’t.”

Of course, it’s possible to come up with stronger forms of skepticism to attack this.

“I derived my way out of ethical egoism a long time ago by realizing that certain sorts of altruistic behaviors were in my own interest (for both game-theoretic and psychological reasons”

As an egoist, I am skeptical. I guess that the “psychological reasons” are carrying most of the weight but are you sure they are sound? I mean, if you convince yourself that altruistic acts are good then of course you will find doing them psychologically appealing. If, on the other hand, you convince yourself that altruism is for suckers, then you won’t enjoy being altruistic. Since either judgement is completely subjective, they should be about equally easy to believe. Personally, I find believing “altruism is for suckers” to be a wonderful belief to hold. I get positive enjoyment out of not giving my money away (because it means I am not a sucker) and then I get positive enjoyment out of spending my money on myself. What altruist can say that?

“Yes, in order to found a family, to give life to posterity, the Dada, Gaga, Mama, Fafa, and Khakha must connect. Useless is mutual admiration, useless are plans and dreams, if there is an absence of at least one of these sexes; however, such a situation, alas, happens in life, and is called the Drama of the Quaternity, or Unhappy Love.” – Stanislaw Lem, The Star Diaries

Medical advice question! I have a large bleeding wound in the middle of my torso…

Just kidding.

I sometimes wonder how bad, from a utilitarian perspective, the Cretaceous-Tertiary extinction event was. Did the late cretaceous fauna suffer a lot, or were they killed off pretty quickly? How much did they lose compared to the lives they otherwise would have lived (perhaps including some moments of happiness, probably ending rather badly of infectious disease, predation, gangrenous wounds after fights with rivals, or whatever). Was the terror of an asteroid impact a significantly worse end than, say, being ripped apart by a t. rex?

The further down the food chain you are, the shorter time you’d survive in a dysfunctional condition. My calculus looks at net good moments, subtracting the number of bad moments from the number of good moments.* So a healthy life in a good environment, ending quickly in the mouth of a predator, could net out very positive. Loss or degradation of habitat caused by an asteroid impact would mean many bad moments, getting longer and worse, for years.

Didn’t mean to come off as an asteroid-apologist. I definitely agree that it was bad news. I was more thinking about whether any utilitarians have put much thought into how bad it was. (Compared to what? I’m not sure.)

It’s slightly tedious to fill out my name, email, and website each time I comment; it would be nice if I could just use my WordPress/Gravatar account. Is it a deliberate decision to disallow this? Is it actually enabled and I just have something configured weirdly?

Question about Less Wrong survey: do I understand correctly that even if I choose the “don’t keep my data private” option, it still doesn’t get published with my name/handle attached? Uncertainty about this is the reason I haven’t taken it yet. I’m confused what the point of the private option would be in such a case, as I imagine nobody fills out a survey just so that their data goes into your drawer and never sees the light.

The survey is taken through a Google Form and doesn’t touch your handle in any way. Some users would theoretically be identifiable through karma, but you can obfuscate that.

The point of the private option is that it allows Scott to do science to your survey results in aggregate with everyone else’s, even if he doesn’t include them in the giant public table of everyone’s results.

If you’re near Seattle and want to celebrate all the good things about humanity – warmth, kindness, hope, progress – in the midst of our coldest, darkest nights, you should come to our Secular Solstice!

On December 13th, Seattle Effective Altruists will be hosting the Seattle Secular Solstice. Like the New York and Bay Area Solstice events, this will be an evening of song, dance, food, drink, warmth, kindness, reflection, community, and reaching for a brighter tomorrow.

You can look at them both and decide which one has questions that appeal to you more – the biggest differences are in parts 6 and 7 of the SSC survey. Otherwise perhaps take the one from the site/community at which you spend more time, or at least that’s what I wish I had done.

Anyone have any advice for dealing with anxiety stemming from the belief in a B-theory of time? Currently whenever I am unhappy or uncomfortable in any way, the thought enters my mind that time does not pass and it isn’t even meaningful to talk about things getting better. This spirals off into a big lump of loopiness where I panic about how I’m going to be suffering “forever” (to the extent that that even has any meaning). I’ve considered speaking to a therapist, but this strikes me as a possible ideohazard. I only post it here because A) most if not all SSC readers have already heard of it and B) I can make use of content warnings.

I’m also looking for advice on handling ideohazards in general. My asshole of a brain keeps coming up with stuff like this (I only recently got over a bout of solipsism-related depression, the details of which aren’t all that important). I worry both that therapists will feel obligated to listen to them if I bring them up, and that they won’t take the risk seriously. I want anyone I talk to to make an informed and free decision as to whether or not to help me. I’ve thought about posting here and over at Less Wrong, but I don’t want to try to turn either place into my personal therapist’s office.

Sure, I recognize a lot of myself in this description. I want to communicate that as distressing as the direct lived sensation of the thoughts you are describing can be, in my experience, from a broader perspective such insight is critically valuable – many people can go through life without fully entering the present in the way you appear to be.

I’m having trouble finding a good definition of “ideohazard” but I suspect it might go something like “memeplex that could cause suffering upon transmission”. If so, this seems precisely what professional therapists are prepared to handle, in that they would generally have more than the average amount of experience encountering, cataloging, and navigating them in the past. As I understand it, therapists are expected to recognize situations they cannot handle, and respond appropriately. Your primary responsibility should be to find someone who can help you safely explore the sensations you are experiencing. As professionals, it is their responsibility to evaluate their own capacity.

I can’t speak to the cultural norms of either forum you mention with any authority, but the process of explicitly writing out your thoughts could be valuable in itself, and from your teaser I suspect I personally would find the results quite interesting to read.

None of these sound like ideohazards for the general public. A therapist may not understand the issue, but it’s extremely unlikely that it will cause them any distress. Neurotypicals just don’t worry about that sort of thing.

A good therapist will just nod and ask you questions that help figure out why that thought is so distressing to you and why thoughts like that in general are so threatening for you. You might need to try a couple different therapists to find one who’s good.

(Standard disclaimer: I’m not a doctor, and the main thrust of this message is telling you to go see one!)

I had a good therapist, but I graduated college and now need a new one. The ideohazard stuff started after graduation so I haven’t had an opportunity to really talk about it. The consensus here seems to be that the things I’m worried about aren’t as “dangerous” as I feared, though, which is encouraging.

I think a good therapist would help a lot. On the other hand, I’ve found that middle-of-the-road NT therapists often just do not get people like us. So proceed with caution.

I have no idea if a competing idea will be helpful, but consider: The B-theories of time require a God’s-eye-view to make sense. Sure, the physicist can write down the timeless equations. She can think in those terms. But she does not experience life that way. She cannot. None of us can. We are temporal beings. Our brains are things in space-time.

So even if B-theory is true, with some atemporal god watching us with its atemporal mind — it sees your hard-times fixed there in space-time — so what? You are not that god.

“On the other hand, I’ve found that middle-of-the-road NT therapists often just do not get people like us.”

I’ve had the best luck with cognitive therapists who have a mystical bent to their practice – not a “crystals and woo” type of mysticism, but an older doctor who practices a form of fairly austere mysticism, like centering prayer or Zen. They often understand those of us who think about big, existential concerns – because they’re the same way.

I think the belief that time doesn’t pass might be a philosophical version of a basic human experience (I *think* it’s a basic human experience) that it’s impossible to vividly imagine one emotion when you’re experiencing a different emotion.

My notion is that when you’re unhappy, you can’t imagine feeling better, and then (since you have a philosophical turn of mind) you get stuck on the idea of being trapped in the moment.

1. A lot of therapists aren’t going to be able to engage with worries about intellectual issues. You might want to talk to a friend about this.

2. Not everybody finds the B-theory of time horrifying. (For example, I find it comforting, in the sense of “all our dead loved ones still ‘exist’ in the past.”) How you *feel* about an idea might change with ordinary psychological changes (growing older, getting more sleep, therapy or meds) independent of the content of the idea.

Loopy disturbing thoughts can be a “symptom” of some sort of mental disorder independent of the content of those thoughts. I tend to think of mental abnormalities as sort of “the weights on the variables”. People can be aware of the same facts, but a depressive will have a tendency to put an unhappy spin on the facts, a schizophrenic might tend to see improbable patterns in the facts, etc. The style, the flavor of your thinking is affected, even if you’re not strictly speaking believing any definite falsehoods.

IMO, argument and reason can help you correct your beliefs, but only stumbling into a different mental state (by exposing yourself to different experiences, or changing your physiological setup) can shift the “flavor” of your thinking.

1. That’s another significant concern. A lot of what I worry about is the big intellectual stuff that the rationalist community likes to discuss. Problem is, many of my IRL friends aren’t into that to the same degree, and as I said in my OP I don’t want to try to co-opt LW/SSC into my own personal therapy session. I suppose this may come down to trial and error, but there are some pretty huge inferential gaps that need to be crossed in order to even begin addressing the issues I’m having. I had mostly crossed that gap with a previous therapist, but now it’s arguably gotten much larger and I’m going to be starting from scratch, which means it may take a while to figure out whether I should stick with a given therapist.

2. A lot of the problem is that it potentially invalidates the entire class of “I know you’ve suffered/are suffering greatly, but it’s all going to get better” type advice. I realize this may be somewhat open to interpretation, but those types of statements do mostly seem to depend on an A-theory of time.

What Luke said above. I’d put it this way: just because my thumb hurts, having just hit it with a hammer, doesn’t mean my whole body hurts or is likely to soon hurt. (By analogy if my past hurts, my future may still be OK.) If no additional medical care is needed, the pain in my thumb needs to get off my agenda.

But maybe additional medical care is needed. Maybe your “asshole brain” is slyly/indirectly accusing you of being in denial about past or present problems. Maybe it’s trying to get you to face up to them, digest them, and respond. This is just a wild stab in the dark and please don’t be offended if I’m way off base, which I probably am.

Luke’s point, and my thumb/body analogy, is arguing with the belief. Maybe that helps, maybe that doesn’t. If not, then: What Sarah said above. Maybe you need to shift the “flavor” of your thinking so your brain won’t be an asshole to you. Therapy might help for that, and if that’s your strategy then the therapist doesn’t need to understand relativistic physics.

The controversial aspect is that it’s drawing two big boxes, one of which has >99% of the population in it and the other which has <1%, using the language of equalizing power / representation between groups. Basic arithmetic should tell you that this means an enormous transfer of power to a very small and very new clique.

It's the same power play behind the hetero/homo 97%/3% split, where decades of overrepresentation in media and politics have convinced the average American there are more than twice as many gays as blacks.

To put it in LW terms, it's exploiting anchoring to make an insignificant demographic seem politically relevant.

Is the media responsible for people overestimating the number of gays? It seems that thinking the number is higher than 3% could be just another instance of people’s tendency to overestimate the likelihood of unlikely things, and (at least for whites) thinking that gays are more common than blacks could be a result of much greater segregation between blacks and whites than between gays and straights (anecdotally, as a white guy, I’m pretty sure I personally know more gays than blacks).

It’s certainly possible, though it seems suspiciously convenient that the meteoric rise of gay power just happened to coincide with their sudden ubiquity in media.

In terms of segregation I couldn’t say. Living in the city I had much more daily experience with blacks than gays, especially since the latter aren’t too visible outside of Pride parades and the Village. Maybe it’s different in the suburbs.

> Basic arithmetic should tell you that this means an enormous transfer of power to a very small and very new clique.

It is a transfer of power to a small clique. I’m somewhat thrown by your claim that the transfer is enormous or that the population is new (their recognition is fairly new). What it definitely is not, however, is an attempt to give them *excess* power. Since they are very few, and do not end up with excess power, the transfer cannot be enormous on the whole.

If you think that they DO have excess power, that’s a separate claim you’ll have to justify.

I went with a conventional identification, because although I don’t identify with it strongly in every detail, on the other hand I don’t not identify with it so much that selecting “other” would have been honest, made sense, or been helpful.

But gender and sexual orientation are a minefield; if you were to put in all the possible variables, it would take up more room than the survey, and whatever you select, some malcontent will disagree – for instance, I picked “asexual” (or at least I’m on that spectrum) but I don’t consider that to be a whole separate different orientation from “heterosexual”, which I also am; rather, they are two aspects, or one is the way I express (or not) the other.

For “preferred relationship style”, I really would have appreciated the option of “none” (I’m aromantic – definitely aromantic, not ‘on the spectrum’ as with asexuality – as well).

I’m new to this blog (and love it!) so I’m not sure whether this is the spot for requests, but if you’re looking for a topic for a future blog post, I’d love to hear your thoughts on the validity of ADD/ADHD as a disorder.

I’m originally from STEM-land, which is a place I know my way around. You need partial differential equations before fluid dynamics. You need Intermediate X before Advanced Topics in X. I don’t know if the apparent ease of navigating STEM-land is because it’s an intrinsically orderly place, or if it’s because I know it well.

More recently I’ve been trying to get to grips with philosophy. I’ve read several introductory texts, but there doesn’t seem to be any obvious ladder of progression. There’s no “Intermediate Ethics”. I can’t find “Informal Logic II”.

I’m not running out of ways to develop these areas, since “Intro to PhilosoWhatever” tends to spill over to other slightly more orderly subjects, like linguistics or law, but there does seem to be this absent body of material that I’d expect to be there. Is this just what the subject is like, or have I got a scrambled view of it?

I don’t think there is an “Informal Logic II;” more advanced forms of reasoning are, I think, inevitably either more specialized or more formalized or both. In other respects, which order is going to be most beneficial is controversial and probably varies from individual to individual (I think reading Nietzsche helped me understand Quine better, but that’s not at all a standard manner of proceeding, and I’m not even sure it would work for others). But since you specifically mention ethics, an underrated classic that you might like (if you like “orderly subjects” like linguistics) is C. L. Stevenson’s Ethics and Language.

Try looking through a couple colleges’ course catalogs; you’ll find some regularities. There is a map, but most people are unwilling to articulate it explicitly and/or refine it. Plus it would be argued to death because philosophy. These things aren’t intrinsic to the field, but they’re fairly predictable given the cultural situation with the humanities and sciences at the moment.

Speaking of culture: I hope there’s a way I can ask this question without too much presumption, but how likely would you be to try and “get to grips” with physics on your own? We’re much more comfortable accepting learned authority in STEM, which bears interrogating.

I hope there’s a way I can ask this question without too much presumption, but how likely would you be to try and “get to grips” with physics on your own?

Depends on what you mean by “on your own”, but I would thoroughly expect to be able to pick up a self-contained physics textbook, for which I meet the prerequisites, and work my way through it without supervision or guidance. I’m not entirely sure if this is addressing your query, though.

As with most fields, analytic philosophy is defined to be the set of things studied by analytic philosophers. That is, it’s surprisingly socially constructed. Some analytic philosophers publish in Stats/Machine Learning journals.

Philosophy is much less organized than STEM fields in the way you describe. To get a general idea of what people study, you could check out the Philosophical Gourmet Report’s breakdown of specializations: http://www.philosophicalgourmet.com/breakdown.asp

If you want to learn more about a particular specialization, say meta-ethics, see if the Stanford Encyclopedia of Philosophy has an entry on it (see, e.g., http://plato.stanford.edu/entries/metaethics/). If something in particular mentioned in the SEP entry looks interesting, you should be able to find ample references in the entry’s bibliography. The SEP also has excellent articles in general on particular thinkers or positions in philosophy.

There are also many good handbooks and the like in subfields of philosophy, e.g., the Contemporary Debates in X series.

I’m a little confused by the fact that, a few months ago, whenever I left a comment I could check a box saying that I wanted to be notified of new comments by e-mail (another one for new posts), but now I don’t see this option.

While I can’t comment on rhetorical techniques vis a vis the z-axis, I’d like to third that sentiment. It’s a lot like the feeling you get when reading the Sequences, really.

I think some of that can be ascribed to how much of those posts is ordering, more than anything. Everyone who reads Scott’s stuff on The Worst Argument In The World has encountered that argument, but I think a relatively small population will have recognized that it is a discrete and pervasive thing. Much the same with motte-and-bailey.

I was rereading your post “Nerds Can Be Bees, Too” the other day and saw that you have auditory processing issues. I do too. And since you are the only person I know with apd (if you have it), who also just happens to be very smart and a psychiatrist, I was wondering if you could expand on your experience a little and how it has influenced your personality. For instance, one doctor described clutterers (people with cluttering, another language disorder) as having a holistic worldview.

“Weiss claimed that Battaros, Demosthenes, Pericles, Justinian, Otto von Bismarck, and Winston Churchill were clutterers. He says about these people, ‘Each of these contributors to world history viewed his world holistically, and was not deflected by exaggerated attention to small details. Perhaps then, they excelled because of, rather than in spite of, their [cluttering].’”

On the SSC survey, I was surprised to see natural law ethics broken out as separate from virtue ethics, since there’s often a fair amount of overlap there. I don’t necessarily disagree with that move, I just thought it was an interesting choice.

Unrelatedly–there were a few moments (e.g., “almost done with this section!”; “choice for non-Americans who want a choice”) when Scott’s personality and writerly voice delightfully showed forth even in the restricted writing genre of “survey,” which made it surprisingly fun to take.

I just took the survey ten minutes ago, and answered that I’ve never commented here; that’s a pretty short window of time for my answers to have been true. Well, so it goes. I was surprised by the question that asked for SAT scores without a corresponding question on ACT scores; it’s my understanding that a roughly equivalent number of high school students take each. My high school (Wyoming, 1997) offered only the ACT, which was common in that geographical region.

And for those of us who’ve never taken SAT or ACT or anything like it, another question to be skipped over. I mean, I could probably dig out my Leaving Certificate results from 34 years ago and convert the grades into the points used for qualification for third level education in this country, but what would that tell you?

Hm – giving myself the benefit of the doubt on some grades (because they’ve completely changed the grading system since my day) and with a bit of finger-crossing, I score 390 points which would actually be enough to get me into some STEM courses – at Institutes of Technology, which are really jumped-up Regional Technical Colleges, which is where I did my poor little Diploma in Biotechnology (one year follow-on course after the two year Certificate in Biology) all those years ago after my Leaving Cert, so that at least is fairly accurate to my level 🙂

I’ve never had an IQ test of any kind, professional or otherwise. Am I really so unusual, as far as the rest of you on here go?

I don’t know, I think taking an IQ test might be useful for certain people, in the same sense that reading Ayn Rand might be useful for certain people. There exists a whole class of chronically underconfident smart people who have absorbed a bunch of societal messages designed specifically to discourage the overconfident majority – for example, I’ve always had it endlessly drilled into my head that making a living writing fiction is essentially impossible. “Only X% of people ever get published, the average author only makes $Y a year, people like Stephen King are a ridiculous anomaly, etc.” Over and over I heard this. Apparently this is a necessary message for a lot of people? For myself I had fully gotten the picture by the time I was like 8, and as a result it would never have even occurred to me to try writing/publishing a novel. This despite the fact that (judging from stories I’ve heard about slush piles) I might well have a better chance of getting published than many people out there who are trying. So for me anyway, finding out [a rough estimate of] my IQ has given me more confidence to try things I might not have otherwise because society had always (repeatedly, ceaselessly) told me that it was pointless. It was like: okay, you’re actually pretty smart, maybe cautionary messages apply a little less to you than you thought.

“What benefit does it give you, save for showing off, which is distasteful?”

One in some cases potentially important benefit is that the result of a test may give you the opportunity to explore new social options, in that a ‘good result’ may qualify you for membership of a high-IQ society such as Mensa. I took a test last year for that express purpose. Presumably more than a few smart people reading a blog like this one may be socially isolated to some extent and have a hard time finding people around them who’re interested in the same things they are (or perhaps I’m just projecting – but I don’t think I am). Mensa is to a significant extent a social organization, at least in Denmark.

In my own case I’ll admit that I have not been particularly impressed by most of the members I’ve met, and so I have not participated in Mensa events for a while, but as a severely socially isolated individual I am glad that I explored this option, and I know that I may reconsider in the future and that this is a social option which is available to me.

So in short, if you lack a social support network I’m pretty sure Mensa may sometimes be helpful, and membership of Mensa requires you to get a high IQ score (or something equivalent). Of course one may argue that it’s the membership and the events and/or friendships established, not the result of the test, which are important, but you don’t get access without the test result.

Oh, and I should note that I very much agree with your view that bragging about a high IQ (or for that matter anything else) is distasteful.

How many people have had IQ tests, and why would they bother? The schools I was in certainly never tested their students for IQ, not even those who were selected for talented-and-gifted programs. I’ve done online tests for my own amusement, but those are specifically excluded by the survey question.

Don’t those use SAT/ACT scores? Mine did. I only got IQ-tested because the school system was convinced that I had whatever mental disorder was easiest to conjure up a false positive for at the time and getting IQ-tested was part of whatever process.

Nydwracu, you are probably thinking of magnet schools. Gifted and talented programs usually mean the student is in the regular classes, but gets pulled out a few hours each week for supplementation. A key difference is that admission is year-round, when the parents nag enough, or when the teacher declares the student bored. So tests scheduled only a few times a year don’t fit.

We had a “science and technology program” inside a regular school. People tested into that, but with a program-specific test, and I don’t think the results were even released — either you got in or you didn’t.

This is interesting, though; America seems (on the limited sample here) to be much more hung up about IQ and tests and IQ tests than the rest of us.

I’m constantly fascinated by the difference between the self-image of the U.S.A. as the land of individualism and non-conformity, and the reality of how conformist it really is. Some of that probably is from the “nation of immigrants” thing; trying to melt everyone down into a ‘model citizen’ by such things as the Pledge of Allegiance.

But really, for the romantic image of pioneers striking out ever westward into the unknown and uncharted, you people seem to adore (and I’m nearly using that in the religious sense) numbers and statistics and classifications.

I never in my living life heard about “percentiles”, for instance, until running across the term on various American blogs about this, that and the other.

As for the baseball score thing?? A formula to calculate a result to three decimal places so you can say X is a better player than Y, look at the figures???

Reply to Nornagest: actually, like Deiseach, I too have gotten the impression that Americans seem more likely to know (and care about?) their IQ than people in, for example, my country (England). I don’t think I’ve ever come across anyone here who knows their IQ score, especially from a professionally administered test.

I do agree, though, that some of the armchair anthropology which follows goes a bit too far. (In particular, that percentiles are a uniquely American concept seems like a fairly silly claim. I come across percentiles all the time and have done for as long as I can remember. The concept is so fundamental that it’s hard for me to imagine going without it.)

On the survey, I appreciate that you prefer numbers for the various academic-achievement metrics, but they mean you’re not going to get any information from non-US readers.

Is there a habit among Americans of taking professionally-administered IQ tests at high school or university? Or are you expecting a substantial portion of your commentators to have spent the money in order to get a professional to administer an IQ test in search of a number which is (among everyone I know …) generally considered both meaningless and in the highest degree of bad taste to consider mentioning.

It’s uncommon for professionally-administered IQ tests to be given at high school or university in the US, but in my generation (born in the early Eighties) it was fairly common at lower grade levels as part of various states’ admittedly half-assed assessment programs for gifted students. Mine was administered around, IIRC, second grade (or age seven).

Standardized tests may have since crept into this niche, I don’t know. I started taking a lot of standardized tests somewhere around junior high (age twelveish), which roughly tracks when the federal government started getting excited about them.

My experience was similar–born in the U.S. in the late seventies, sent for a professionally administered IQ test in elementary school. (My sister had her IQ tested the same day. Our scores differed by two points, so my mom kept mum about them since she didn’t want either of us to think the other was smarter/less smart. My sister finally found the score sheets years later, in a box in the basement.)

Briefly, I feel that part of the American educational system is geared toward identifying gifted students and then doing nothing much with them. The problem’s probably most acute at the junior high level — high schoolers at least have electives and AP classes, and the social knock-on effects are less nasty in elementary — but at no level does it rise above “barely acceptable”.

In the era I was discussing, even the identification part was pretty noisy.

RE the survey – In American English does “room-mate” mean someone that you share a bedroom with – or is it anyone that you live with who is neither a relative nor a partner, regardless of whether you have separate rooms? I was confused as my living situation (me and two friends living in a three bed house) didn’t seem to fit any of the categories as defined by U.K. English.

It’s someone you live with (regardless of sharing a room) who you have a certain social relationship of equals with. For instance, if you’re subletting a room in someone’s house, I probably won’t consider you to be roommates with the landlord even if he lives there unless it’s explicitly some sort of hippie/co-op arrangement, but I might consider you to be roommates with someone who is subletting another room in that house.

Sometime I call her my tenant, depending on the context. I don’t usually call her my flatmate or my lodger, because those terms aren’t part of my vernacular 🙂 (Also, it’s a house, rather than a flat, so would flatmate even be accurate?)

I very much enjoyed going to the meetup; Thank you for hosting. Everyone was nice, and it increased my level of goodwill for this community. I’m sorry that Ozy wasn’t able to participate most of the evening. I hope they’re feeling better.

Last year’s LessWrong Survey said:

If you are reading this post, and have not been sent here by some sort of conspiracy trying to throw off the survey results, then you are the target population for the Less Wrong Census/Survey. Please take it. Doesn’t matter if you don’t post much. Doesn’t matter if you’re a lurker. Take the survey.

This year’s says:

If you are reading this post and self-identify as a LWer, then you are the target population for the Less Wrong Census/Survey. Please take it. Doesn’t matter if you don’t post much. Doesn’t matter if you’re a lurker. Take the survey.

I felt that last year, it was clearly correct that I should take it, given my LW-curious position. The opening language for this year’s survey implied that I should not, given the same position. Odd, especially given that my pro-LW feelings have increased.

Given that I went to the meetup, I intend to take it. Before I went I had figured that I shouldn’t. Identity is hard. Also, I keep my identity small, meaning if you ask me if I self-identify as X, the answer is probably no.

After the last open thread (in which I asked what I can do to become more active online in rationalist endeavors), I resolved to post an idea in each semimonthly open thread whenever I could, and ask whether it has been covered in LW and/or what LW terminology would be used to describe it. So here goes.

I’ve noticed a prevalent cognitive bias that can be described as follows. It is too often assumed that a phenomenon X will result in only positive (or only negative) effects on a group/individual A. Actually, unless X is consciously designed to be beneficial (or detrimental) to A, its effects on A are unlikely to consistently maximize (or minimize) A’s overall well-being. In fact, such assumptions often seem to take a naive approach to calculating the effects of X on A’s well-being, and there is no reason to think such phenomena are going to result in overtly “good” or “bad” things for A in the first place. This is similar to (although not quite the same as) the fallacy of Ascribing Human Purpose To Natural Events.

This type of fallacious thinking seems prevalent in certain types of religious mindsets; for example: “Clearly the devil brought on this bad weather in order to disrupt our religious service!” Here X is the bad weather, and A represents the local community, and it is assumed that X can only result in detriment for A (and therefore X is blamed on the devil). This overlooks the fact that X may be bringing much-needed moisture for the local flora, or X may be impeding some members of A from some other activity which might be spiritually harmful, etc., because X is simply a meteorological phenomenon that has no concern whatsoever for the well-being of A. And the devil could come up with some much more refined method of ruining the religious service which avoids the possibility of such benefits to the community. (Note that I’m not pointing to belief in the devil as directly resulting from this fallacy, but rather, the assumption that creating this storm is what the devil would do in this situation.)

Again, this is a half-baked explanation of a type of thinking I see in many contexts which has bothered me for a while. I find it hard to think of many clear-cut examples, but if I get the chance later, I might post about this and include a different one in Ozy’s open thread.

Reminds me of the Halo/Horns Effect – this rain is interrupting our service, and that’s bad, so clearly rain is maximally bad in every way.

More generally, especially in internet discussions, it might be considered a heuristic for speeding up arguments – people just assign “good” or “bad” relationships between a policy and a group and leave it at that, presumably in the belief that other effects are weak enough that they won’t change the analysis.

Yeah, the “bad weather disrupting the event” scenario is sort of an extreme example, but I chose it because I figured it would be completely non-controversial within this group. I really notice this kind of thing in discussions of the relationships between certain policies and certain groups. But most of those examples could get sort of dicey. I mean, the common implication that anti-Israel policies are entirely detrimental to Jewish interests everywhere is a fairly clear-cut example, I guess.

I’d like to start blogging but I don’t really know the best way to host one. I know how to program and want something I can mess with, but I don’t want to reinvent the wheel or have to touch HTML/CSS/JavaScript/PHP more than I have to.

One good blog engine for programmers is Octopress. I haven’t used it, but I’ve seen a lot of blogs based on it, and it has a good theme and working code syntax highlighting. It is built on Jekyll, a static site generator, meaning you edit your blog by editing Markdown files on your computer, then use the terminal to regenerate your HTML files, and finally re-upload those static HTML files to your webserver. Static sites load faster and have fewer security holes than blog engines that are web apps. Octopress provides default HTML and CSS templates, so I think you only need to know how to write Markdown, and have a web host.
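If it helps make the “static site generator” idea concrete, here is a toy sketch of the core loop a tool like Jekyll performs: read source files, run them through a converter and a template, and write out plain HTML. This is an illustration only, not Jekyll’s actual code – real generators do proper Markdown conversion and themed layouts, and the paragraph-wrapping `render` function here is just a stand-in for that step.

```python
# Toy static-site generator: the core idea behind Jekyll/Octopress.
# Real generators convert Markdown and apply themed templates; here a
# trivial paragraph-wrapper stands in for the Markdown step.
import pathlib

TEMPLATE = "<html><body><h1>{title}</h1>{body}</body></html>"

def render(markdown_text: str) -> str:
    """Stand-in for Markdown conversion: wrap each blank-line-separated
    block of text in <p> tags."""
    paragraphs = [p.strip() for p in markdown_text.split("\n\n") if p.strip()]
    return "".join(f"<p>{p}</p>" for p in paragraphs)

def build(src_dir: pathlib.Path, out_dir: pathlib.Path) -> list:
    """Regenerate the static site: one .html file per .md source file."""
    out_dir.mkdir(exist_ok=True)
    written = []
    for md in sorted(src_dir.glob("*.md")):
        html = TEMPLATE.format(title=md.stem, body=render(md.read_text()))
        target = out_dir / (md.stem + ".html")
        target.write_text(html)
        written.append(target)
    return written
```

The output directory is just ordinary HTML files, which is why hosting is so flexible: you can upload it to essentially any web server, with no PHP or database required.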

Hosting your blog is easy with a static site, because you don’t have to find a host that supports PHP or Rails or whatever, or pay extra for a plan with those features. The host I use for my very basic home page is ControlVM – I chose it because it is the cheapest host I could find that supports PHP. I think I pay about $35/year for both the hosting and the domain registration with them. Another host that would be cheap if your blog is not popular is NearlyFreeSpeech.Net, which has pay-for-what-you-use billing. Some of those hosts also offer to register an internet domain for you, but if you choose one that doesn’t, I’ve heard Namecheap recommended.

If you can use Git, you might want to use GitHub Pages as free hosting, and configure a CNAME record to point your domain at GitHub’s version of the website (GitHub’s help explains how to do this).
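For reference, pointing a custom domain at GitHub Pages typically involves two small pieces of configuration. The domain and username below are placeholders, and GitHub's own help pages are the authority here, since the exact record targets have changed over the years:

```
# In the repository root, a file named CNAME containing the bare domain:
www.example.com

# In your DNS provider's zone file, a CNAME record pointing at GitHub:
www    IN    CNAME    username.github.io.
```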

The main limitation of static site generators is that if you want to allow visitor comments, they must be handled entirely through client-side JavaScript, such as with Disqus. If you want comments stored on and served from your own server, you will need a server-side blog engine such as WordPress or Ghost.

Is the redundancy in the gender question deliberate? (Sex assigned at birth; then also distinguishing between cis male, trans male, cis female and trans female.) You could ask for sex assigned at birth and current identified gender. (I’m cis, so I don’t have a personal stake in this, but it just jumped out at me from a survey design POV.)

The family-religion question is unusual in its long-term historical perspective, as opposed to asking what religion the family you actually grew up in practised. I’m a Christian, and my parents are atheists and brought me up without religion (although I encountered it at school), but AFAIK my ancestors were Christian, so I ticked that. I feel irrationally irritated that the survey doesn’t distinguish between me and someone who’s Christian just because they were brought up that way.

My IQ is from an official test when I was 15, but I fear it’s dropped since.

I didn’t answer the income and charity questions because I technically have zero income, but in practice we have a joint income as a couple and make shared decisions about spending and charitable donations, so I’d have quite liked some way to indicate those. Hopefully my husband will take the survey too and you’ll get the data that way.

I would have liked to do the digit-ratio question because it’s interesting, but I was realistically never going to get around to it, so I decided better to do the survey without it than not at all.

I’m the atheist child of Christian parents who were each the first Christians in their Jewish families; in my mother’s case her family had been secular Jews since the late 1800s. When I became an atheist in my early 30s, her whole family breathed a sigh of relief and stopped pretending they were religious in front of me.

Also, I’m a student with no income NOW, after having emigrated to Ireland with my husband, but last year I was a corporate slave making some 60K a year before the layoff, which explains why my current income is 0 and I gave 2000 to charity. I’m not sure what the heck I’m going to do about my taxes this year.

Hey speed, did you answer survey questions with your joint income/donations (with spouse) or individual? I went for joint, because we don’t keep our finances separate enough to allocate donations to his vs hers. Whatever, I guess – no survey can anticipate everything, and it was long enough already.

Use Web2FB2. You can download 10 articles at a time and read the resulting ePub files using Moon+. Scott has 346 articles so far, so you can get by with a mere 35 files; less if you skip things like meetup announcements.

Thanks. This is useful. Do you know of anything simpler? I had an iPhone app which allowed me to just enter the names of blogs I wanted to read and get as many entries as I wanted. I don’t know of anything similar for Android.

I was looking at the survey and came to think of a question I want answered before I can complete the survey. Note that I’m only getting into philosophy, so this might be another very obvious thing to ask, but I’ll do it anyway :p

I’ve been reading up on consequentialism and deontology, and the way they are presented, they seem mutually exclusive, at least if they are viewed on the same level of decision making. For example, if I decide to go with deontology in general and act on a fixed set of rules, I cannot at the same time decide to strive for the most beneficial consequence on each occasion.

If viewed on different levels of decision making, though, it seems like the two could (and do, actually) work together really well. Say I devise some rules which I think might have the most favourable outcome (quite consequentialistic, I should think). If I now blindly stick to these rules in everyday life, to account for my own fickleness and time constraints, that would be a rather deontological thing to do, no?

Add to that various iterations of rules-review following real-world feedback and you have what I think most people are doing, anyway.

So my question(s) would be: how do you know if you’re a deontologist/consequentialist? What does it mean to be one or the other (do they strive to erase traces of the other from their decision-making process)? And wouldn’t the most sensible thing be to mix both?

I usually interpret such questions as referring to your lowest-level (closest to the bare metal) philosophy.

If the reason why you commit to following the rules is that you expect doing so to result in good outcomes, then you’re a consequentialist.

If the reason why you select rules according to their predicted consequences is because you believe that it’s virtuous to act according to the rules of consequentialism, then you’re a deontologist.

If the reason why you follow the rules is because rule-following is inherently virtuous, and the reason why you select rules according to their consequences is because you expect consequentialism to produce good outcomes, then you’re some sort of hybrid.

Would it be a reasonable use of terms to describe a personal system this way?

A. It is good for people to be fed and clothed, not to be made sad, etc. (Deon?)
B. So for particular actions, look at what the consequence/s will be. (Conseq?)
C. Using logic, practicality, and kindness. (Virt?)

How do terms like ‘utilitarian/utilitarianism’ fit with this sort of description? Would that name the system described here?

Your A is a conception of good consequences, not an ethical theory that can be classified as deontological/consequentialist/etc. Consequentialism just says “maximize good consequences”; what “good consequences” entails is a separate question. You could say that the consequence to be maximized is your own pleasure, or the fulfillment of people’s preferences, or the number of paperclips in the universe.

Sometimes the fault in not knowing how to answer a question lies in the respondent, and sometimes in the question. In this case, in the question. I agree pretty strongly with what you suggest tentatively – a “compatibilism”, if I may appropriate that term, regarding Consequence/Deontic/Virtue ethics. Not to deny that some philosophies can be pigeonholed pretty comfortably into one of these slots. But others – better ones, I think – can’t.

I have a question for future reference. If we want to talk about the general concept of slavery with no mention of race (e.g. whether one should be able to sell oneself into slavery), can that be done here, or should it be taken to Ozy’s thread?

Sure, in the long run we are all dead, if not from natural causes a few decades from now, then when our empire collapses a few hundred years from now, or when the sun becomes a red giant a few billion years from now, or when the universe runs out of negentropy a few trillion years from now. Does that mean we should refuse medical treatment, neglect to have children, not give a damn if our civilization ends, and forget about FAI? No, of course not.

I would actually argue yes, although that’s a long discussion for another time. It would be interesting to me if Scott considered doing a lengthy write-up arguing for antinatalism, perhaps tapping into some Schopenhauer or Ligotti. Even if he doesn’t actually empathize with the antinatalist school of thought, it would be an interesting intellectual exercise, similar to his NRx write-up.

I can understand why someone would find having children immoral, but why would you want to refuse medical treatment?

I think I might be an almost-borderline-antinatalist. I think creating minds in an imperfect universe is generally bad but is currently instrumentally necessary for some reasons:

1: Halting reproduction now would lead to the collapse of civilization, which would lead to a lot of suffering and early death.

2: Some people alive today might achieve biological immortality, provided civilization does not collapse on them. This is really an extension of 1, but it makes it even more important (though I’m not sure to just what extent I’d be willing to value potentially longer-lived individuals over shorter-lived ones).

3: If humanity went extinct now there would still be animal suffering and death, and eventually another sapient species could evolve to take our place.

4: Apparently my utility function gets rather complex when it comes to weighing life-years against each other. Creating a few billion people who have a (let’s say) 85% chance of dying by 100 seems justified for a (let’s say) 15% chance at indefinite lifespan for the rest of earth’s population, but I’m not sure at all at what point I’d stop.

Also, for clarification: those few billion more people would get the indefinite lifespan too if it worked; it’s just that they wouldn’t exist in the other option, so preexistence cannot have their value added in.

Is there any reason that feminism and social justice are conflated on the survey? I might agree that social justice encompasses feminism, but I don’t think it works the other way around. (I’m not trying to start a race or gender discussion here, I’m trying to comment on the structure of the survey).

On the one side, it’s hard to have social justice without caring about feminist concerns (obviously).

On the other side, you can’t fight for the rights of all women if you don’t understand what other factors play into their oppression. It is often argued that the narrower, more concentrated approach of feminism in the past fought mostly for white, hetero, cis, able-bodied women instead of for all women.

That’s why intersectionality is a big thing now, invariably merging both movements to a large extent. Or, to quote Kimberlé Crenshaw, who is much better at explaining this than me:

“Cultural patterns of oppression are not only interrelated, but are bound together and influenced by the intersectional systems of society.”

First off, thanks, but ‘my’ answer was more or less just rationalwiki paraphrased, so yeah…

To (try and) answer your question, the full passage from which I quoted would be:
“[Intersectionality is] The view that women experience oppression in varying configurations and in varying degrees of intensity. Cultural patterns of oppression are not only interrelated, but are bound together and influenced by the intersectional systems of society. Examples of this include race, gender, class, ability, and ethnicity.”

So these ‘systems’ would be race, gender etc. And although to use the word ‘system’ is a bit strange there, it is also used in that context by others, like Patricia Hill Collins:

“[Patterns of oppression] are bound together and influenced by the intersectional systems of society, such as race, gender, class, and ethnicity.”

The survey says “feminist or a member of the social justice community”, which I interpret to mean to be an inclusive “or”, so a feminist that’s not a member of the social justice community should answer “Yes” to that question.

But yeah, it’s certainly possible to be a non-SJ and even an anti-SJ feminist, e.g. individualist feminism.

I started taking your survey but then closed the window. I don’t see my sex and interest in the opposite sex and lack of interest in sharing my wife with another man as a mere personal preference, like a taste for pumpkin spice. Kinda squicky, your survey.

Okay, this might be the first time I’ve used the word “trigger” in this context, but… you’re triggered by the idea that sociosexual conformity might be seen as optional? I can’t decide whether or not this is supposed to be tongue-in-cheek.

I can’t speak for Lesser Bull, but I feel threatened by all this choice.

ETA: Also, I’m a little confused by the idea that sociosexual conformity might be “optional”–I’m pretty sure that if you don’t think there should be central scripts that people have strong incentives to follow/buy into, you’re dismantling sociosexual conformity altogether. That’s how norms work.

Also, I’m a little confused by the idea that sociosexual conformity might be “optional”–I’m pretty sure that if you don’t think there should be central scripts that people have strong incentives to follow/buy into, you’re dismantling sociosexual conformity altogether.

From where I’m standing, it’s a little odd to refer to something as conformist or normative if heterodox options don’t exist. They might be rarer or socially discouraged in some way, but if there aren’t any then shunning them wouldn’t be evidence for compliance with sociological roles; it just wouldn’t be on the table.

An example: we can learn quite a bit about how members of a culture actually behaved (as opposed to what their ideals of behavior were) by looking at their legal codes. If we’re reading the rules of a monastic order that existed 1500 years ago, and we find that, say, accepting donations of food was punished by a public flogging, we can thereby infer that there were at least a few hungry monks who’d otherwise be keen to accept charity, and that this was considered harmful. (Records of actual punishments are even better, since they give us proportional data.) Now, if we don’t see a punishment listed for, say, atheism, what do we infer from that, and why?

Another example: if all your friends metabolized oxygen, would you do it too?

The way people talk about “relationship style” nowadays, including in Scott’s survey, is more like “I like vanilla ice cream and you like chocolate ice cream” than “I like to have a full-time job and do good by the people in my life and you like to smoke pot and occasionally mug people”. I’m not saying that polyamorous people are bad people (they’re not), but it’s not unreasonable to be miffed and/or alarmed at the implied equivalence when you think there are good reasons for one to be normalized over the other.

I’m glad I was able to provide you with an opportunity to conspicuously signal offense in this matter.

(I realize I sound mean, but if you had brought this up in a way like “X bothered me, wouldn’t Y have been a better phrasing?” rather than “HERE’S WHY I CLOSED YOUR SURVEY IN DISGUST AFTER THE FIRST QUESTION FOR REASONS YOU COULDN’T REALLY HAVE PREDICTED SO THERE” I would have been more inclined to treat it as a real request.)

B. Don’t apologize for sounding mean. You’ve been testier and less fair and generally less prototypically Scott-like in the last few days. And it’s been kinda entertaining, honestly. Not that I have an objection to prototypical Scottitude, or else I wouldn’t be here. Anyhow, I went ahead and completed the survey, just skipping the questions that assumed a moral equivalence that I object to. In so doing, I answered the question about your Amazon link in the negative (no, I’ve never noticed it), and went and found it. You note that you think it’s compatible with Smile. Can you or anyone else confirm that? Thinking that it wasn’t compatible with Smile is the main reason I haven’t tried to use it before.

I went ahead and completed the survey, just skipping the questions that assumed a moral equivalence that I object to.

You can say that you prefer to be monogamous without thinking that it’s morally equivalent to polyamory, just like you could say that you prefer to not murder, even though that’s clearly not morally equivalent to murdering.

To use your example, without implying that the deviancies favored in these precincts are equivalent to murder: if you had objections to murder, and you lived in an environment that had a murderers’ rights movement, which called opposition to murder bigoted and argued that the choice to murder or not was just a preference, you might object to a poll question that asked whether your preference was murderous or non-murderous. It would beg the question.

The murder example is not really working for me. It just highlights for me how misguided your priorities seem to be, that you think murder remotely makes sense as any kind of an analogy for these so-called “deviancies.” I suppose you are just trying to signal how strongly you feel about the issues, but it seems certain to get in the way of any possibility of productive discussion.

Protagoras, you’re taking the analogy too literally. Pick a value of X such that X is a deviation from a norm that you think is important and worthwhile. The most obvious choice is murder, because that’s everyone’s go-to example of a morally wrong action. But you really can pick anything. Now imagine a survey posing the question, totally non-judgmentally, of whether you prefer X. It should smart.

It took me a while to get past my disagreement, but Lesser Bull does have a bit of a point: the term ‘preferred’ does have some implications about the question. A radically poly advocate who believed monogamy is inherently abusive could share the objection.

(Not sure what the hell you want regarding gender and orientation, though)

“Would you prefer inferno or non-inferno? Ha! Just kidding, it’s all inferno.”

Jokes aside, though, I don’t really see what the problem is. Just select “non-murder” and move on. It does make explicit that you’re among people that don’t share all of your values, of course, but if you’re e.g. a Christian traditionalist among poly folks and you haven’t figured that out yet, you haven’t been paying attention.

No, anonymous, I can’t really imagine a question of that form that would smart. Do I prefer people to think rationally or put their faith in tradition? I prefer rationality. No sting. Closer to the murder example, do I prefer people to engage in reasonable discussion or resolve conflicts with lethal violence? I prefer rational discussion. No sting. I could keep looking for examples, but I think I just don’t get this one. On the other hand, I remain offended by comparing murder to non-mainstream romantic preferences, no matter how non-serious I keep getting assured the comparison is.

In all seriousness, though, I’m miffed at the implication that comparing two things in some respect implies that they’re identical in all respects. It’s not that the comparison is not “serious”, it’s that the person isn’t saying, “X is like murder, now you wouldn’t like to see a survey asking you if you prefer non-murder, would you?” They’re saying, “You seem to think that my reaction to X is unreasonable, given that I don’t think X ought to be normalized. Here’s something you presumably don’t want to see normalized–let’s pick the most obvious example, murder, so as to avoid getting bogged down in the details. Now wouldn’t you object to seeing murder normalized in this way?”

This is useful for the same reason that Nazi analogies are useful. I really don’t understand why everyone rails against them–are people copping to being way irrational and unable to abstract away from the particulars?

The first time Ozy hosted the Race & Gender thread, the top comment was a proposal for the elimination of men from the human species. Given that, what do you think would happen if, say, Multiheaded hosted it?

By the way, I’d just like to mention that those threads are becoming a great source of joy to me. Two of my favorite moments so far were:

– A frequent LW commenter who pretty much only ever talks about decision-theory stuff declares that he’s dipping his toe in the politics-water, and within a couple sentences is gesturing towards the merits of American racial genocide.

– A very aggressive, right-wing signaling man writes a long comment which includes denying Ozy xer choice of pronoun. Ozy replies by briefly describing how upsetting it is to xer when xe is misgendered; alpha man suddenly apologizes, with a heartfelt promise to be more courteous in the future.

Plus it’s always a relief when people can finally hoist their black flag and start gleefully ripping into the topics they haven’t been allowed to discuss in one place before. And now I’m reading Ozy’s awesome blog!

I don’t think I ever could have guessed that I’d be having a Happy Death Spiral around race and gender in the open thread.

The topic there seemed to assume that cryonics procedures at the time would not work and anti-aging research would not arrive before all the males die off of old age, so assuming that…

While actually killing all the men is obviously evil, as a male I don’t really have a strong reaction to “eliminating men” by just not making more to replace the old ones. I don’t think it’s a good idea (because it makes things worse for a lot of heterosexual women), but it doesn’t trigger any strong moral condemnation either.

Related: I’ve always found the people who call the dying out of their race through interbreeding “genocide” a little strange. It’s not like anyone is actually being killed who wouldn’t die at the same time in the same way anyway.

Right — the implicit individualism betrays, at the very least, an ignorance of how humans generally think.

Escaping the horror of death through identification with a phyle, a usually-biologically-rooted institutional intelligence, allowing a sense of pseudo-immortality through continuity. Deyr fé, deyja frændr, deyr sjálfr it sama; en orðstírr deyr aldregi hveim er sér góðan getr. Deyr fé, deyja frændr, deyr sjálfr et sama; ek veit einn, at aldri deyr: dómr um dauðan hvern. Even leaving aside that institutional intelligences are (at least in some cases) arguably alive, the end of a culture is the end of a continuity and the end of each one of its members.

This isn’t commonly realized because the people who would otherwise be positioned to realize it and make it known usually identify with a phyle that roots itself not in biology, but in beliefs — and the room for compromise with the impulse toward specifically biological continuity that it allows is much lower (and much more specific; white people are barred from it almost completely) than in just about any other belief-rooted phyle. The concern for value-preservation that that crowd shows is rooted to some extent in that same impulse, I think; it’s just channeled in an odd enough direction that it isn’t recognized for what it is. The rationalist does not make a name for himself in a way that the reader of the Hávamál would recognize; he instead makes a name for himself by becoming a part of the process that he believes will let the values of his phyle reign forever. But the drive toward the immortality of the name is still there — and we should expect that a probably-millennia-old Indo-European drive (does it appear elsewhere? I don’t know, but I doubt it’s universal given the existence of alternative structures elsewhere, like Chinese ancestor-worship and the Yoruba(?) reincarnation stuff) would not die so easily.

A program of building status-dynamics that incentivize the destruction of a phyle still works toward the destruction of a phyle; a steelman of the “white genocide” stuff would run along precisely those lines. (While it is certainly a creative use of an ideograph, I see no reason to resort to white-magic mantras, especially since the logic behind them can easily be lost. It is enough to say that there is a desire for the destruction of certain continuities.)

(Of course, America is characterized by an already-existing confusion of phyles, and reconciling multiple potential phyletic identifications is about as difficult as reconciling multiple internalizations of perceived local value-systems — but even worse, since each potential identification permanently weakens every other one. This is probably a causal factor behind all manner of bizarre Americanisms, from identification with a supposedly monolithic and undifferentiated ‘white race’ to rejection of the whole thing in favor of a phyle that avoids the confused factors in favor of total identification with a memeplex — that is, an impulse toward religiosity, which can manifest in traditionally-religious or in new [progressive/rationalist/new-age/etc.] forms.)

Translation:
Cattle die,
kinsmen die,
the self dies likewise;
but the renown
for the one who gets good fame
dies never.

Cattle die,
kinsmen die,
the self dies likewise;
I know one thing
that never dies:
the repute of each of the dead.

I think people underestimate how much gets lost– even if your cultural continuity isn’t broken, what your culture meant by its words and symbols tends to shift.

Two thoughts about names and things:
When I was a kid I’d see “Otis elevators” on elevators, and I’d think “Hardly anyone knows anything about Otis as a person, but there’s his name on the elevators. It might matter for something, but how much?”

Lately, I was thinking about crane scissors. Someone must have come up with the idea of crane scissors. Their name is probably forgotten, but their design innovation keeps going on in many variations. It’s the sort of legacy I wouldn’t mind leaving, but I’m not sure how many people feel that way.

Oh, I’m well aware that most humans don’t think like me. Strange was perhaps the wrong word because I am the strange one. There really needs to be an adjective to mean “has alien value system relative to me”. I suppose that’s sort of what calling someone “evil” means but the connotations are completely different.

Meta comment:
In the footer, the text regarding the amazon affiliate program links to wordpress.org/ which is slightly weird. I understand that the text says “see amazon tab above for details”, not “click here for details” and that it’s customary for wordpress blogs to have a link to wordpress in the footer, but I think that under the circumstances, it’d be better not to have an href at all, or one linking to http://slatestarcodex.com/amazon/.

I’ve been wondering this for a while, but I’m not sure how people who value freedom as more than an instrumental value distinguish between increased freedom and one group gaining power over others. To that end, I invite everyone to discuss the below hypothetical dilemmas.

(Hypotheticals have been intentionally sanitized via Lovecraft, please don’t bring real world groups in out of respect for our host.)

1. The Cult of Great Cthulhu wants to conduct its dark rituals but is being disrupted by students from Miskatonic University’s Human Studies program. If ejected from their underground caverns, the students instead picket outside the cave entrance and heckle cultists as they enter and leave. The students claim they have a right to protest and that the Cult’s pro-devouring teachings are antithetical to a free society. The cultists claim they have a right to worship freely.

2. A bill comes up before the Arkham county assembly that would require all children to attend local public schools. The motion is supported by MU professors, who believe that refusing to teach children exclusively the scientifically accepted truth (that man degenerated from hyperborean spacemen over dark aeons) is indoctrination and against the rights of the child. It is opposed by Cthulhuist Warlocks, who assert the freedom to raise their spawn in accordance with ancient teachings that man is the accident of indifferent fate.

3. A developer in Salem would like to build a large sprawl of shabby hovels connected by underground catacombs. The local residents know this would likely attract Cultists to move in en masse and oppose the development gaining a permit on those grounds. The Cult is notorious for voting as a bloc to defund social services they don’t partake in, like garbage collection, and has social mores which ordinary Salemites consider oppressive and unpleasant.

So who, if anyone, is in the right in each example? If there is a violation of rights, how do you justify it other than by preference for one side or the other?

When the examples are that close to famous talking points, it seems to me that transposing them into the Lovecraft universe is less akin to a true analogy and more like talking about reproductively viable worker ants in place of that group of which we dare not speak lest it descend from the indifferent stars and devour us all. But sure, I’ll bite.

In Example 1, analyzing the situation in terms of rights is unlikely to be helpful; the cultists are within their religious rights to eject heckling students from their services, and the students are within their civil rights to organize in protest. Nonetheless the situation is unstable, and allowing it to continue would be likely to lead to worse problems. If I were the Dean of Students at Miskatonic, I’d let the status quo ride for a couple weeks to see if either side got bored, and then I’d separate the sides.

In example 2, the legitimacy of forcing children to attend public schools touches on issues of the proper scope of government that go much deeper than a single point of doctrine. I’d rule in favor of the cultists, but on grounds of the local assembly overstepping its authority, not on religious freedom grounds. Freedom of conscience doesn’t extend to defining science, but the ability to step out of the system, e.g. by homeschooling, provides an important check on the government’s ability to do its own indoctrination.

Example 3 is a lot trickier and I don’t feel confident analyzing it without a lot more thought.

There’s a tradeoff between being close enough to reality to be useful and far enough from it to be inoffensive. I probably could have made them more distant without sacrificing too much but then again my main concern was satisfying the need to be courteous to Scott / not causing a serious flamewar.

Anyway the third one really is the stumper, though I’ve seen bitter fights on the first two as well. The real world controversy behind that one actually happened near me, when it wound up on This American Life I was pleasantly surprised.

First of all, I tend to see rights not as unbounded moral imperatives but as classes of situations where history has taught us that meddling tends to do more harm than good. Scenario 3 can be described in terms of property rights vs. community interest, as Blacktrance has downthread, but I don’t feel that that’s a very productive way of looking at it, partly because property rights are about the weakest of the rights we generally recognize and partly because all the consequential weight of the decision lies with parties other than the developer.

That pushes the decision into some extremely hairy territory with nasty precedents on both sides. I live in the SF Bay Area, so I know all about the bad stuff that can happen when too much power over development decisions lies with a body unresponsive to market pressures. But on the other hand, the existing residents — religiously bigoted though they may be — do have a legitimate interest in the management of their community, and if one option has predictable implications regarding its future management then I think it falls within the proper scope of government to act on that likelihood. (Social mores are another matter.)

I think the developer has a somewhat stronger case. But it’s not as cut-and-dried as the other two, and the best solution might be to allow the development but commit in some hard-to-reverse way to maintaining the desired level of public services.

1. seems to be a simple “your right to swing your fist stops at the edge of my nose” case. The students have a right to protest; we have to make a tradeoff about how much the students’ speech can interfere with the cultists’ worship. Telling the students to go protest in the designated “free speech zone” is clearly intended to render their speech powerless. However, some form of restriction keeping them from blocking the cultists’ access to their grotto is probably called for.

1.5. To double down on this, suppose that 1% of the student protesters are taking pictures of the cultists, then posting those pictures to anti-cultist forums, figuring out who they are, and posting their names and addresses so the cultists are now subject to harassment in their daily lives. (Does this count as doxxing?) The majority of the students are innocent, but a fringe are verging on becoming a hate group. Is there a reasonable measure we can use to protect the cultists from personal harassment in their daily lives? I’m not cool with the police banning cameras at anti-cult demonstrations, but I have no idea how to protect the rights of the 99% of protesters to protest, while protecting the cultists from harassment by the 1%.

2. Previously, it seems warlocks were free to home-school their children in their “accident science”. Forcing them to send their children to schools where they will be taught the orthodox degeneration theory seems a clear limitation of their freedom. Unless Arkham’s schools are significantly different from other American schools, I don’t see how forced school attendance can legitimately be framed as protecting the rights of the child; that argument seems clearly false. I require additional convincing that this can be formulated as a question of the child’s right to a conventional education vs. a parent’s right to choose an unconventional education for the child. Really, this looks like a question of who gets to ignore the child’s rights and enforce their will on the child.

(I have just Delta’d myself in favor of home schooling. Whargarbl!)

3. The value here is current community members’ right to shape the kind of community they want to live in vs. a developer’s right to ignore community values and impose changes, primarily for their own economic benefit. The rights of theoretical cultists who might move into such hovels are not really involved here. The actors are community vs. developer. I have to side with the community here. The community is gross and bigoted, but its right to control its own neighborhood overrides the developer’s right to make a buck. This is a particularly pernicious example, because the developer can (correctly) point out the community’s anti-cultist bias, but as a pure rights question, the developer doesn’t have a leg to stand on.

It would be better, if the developer has purchased the land in question, for community members to buy it back. But the developer took an investment risk. Maybe they should have bought Florida swampland instead.

Summary:

1 is a balance of positive rights. 2 and 3 are both about powerful groups pretending to care about the rights of marginal groups so they can push around slightly-less-powerful groups who have a much stronger rights claim. The unpleasantness of the warlock parents and the bigoted community makes me want them to lose, but I cannot make that a legitimate argument in terms of rights or freedoms.

I’ve heard lawyers who defend truly unpalatable clients call them werewolves. Even the werewolf needs the right to free speech. It’s only by defending the werewolves that we make sure that ordinary people are defended too; otherwise the law becomes a popularity contest. Similarly, we need to protect the freedoms of warlocks and bigoted townsfolk, despite the fact that we’d really rather they lost.

The whole point of banning me punching you in the nose is to render my punches powerless. Why should we not treat the coercive power of speech the same way?

Maybe it is important for democracy to force people to be aware of people with different viewpoints. But that merely suggests to me institutions such as forcing people through free-speech zones or political marches through neighborhoods, and does not suggest to me that people should be forced to hear picketers when they are visiting specific institutions. At most the picketers might be allowed to be visible off in the distance.

I’m an American, and relatively orthodox in this regard. The right to free speech has already been decided on. Part of that right is that it has been decided that we value a pluralistic society, and that vigorous public debate is either a good thing, or a cost we’ve decided we have to pay for the values we want to keep.

At this time, I am not interested in justifying these propositions, but I take them as a given. Given these values, the invention of a “free speech zone” is a deliberate attempt to undermine them. (And an Orwellian naming convention, as well.)

Given that we believe in the right to free speech, we must balance it with the right to be left alone. (If you don’t believe in the right to free speech, then it’s easy: kick the protesters out.)

Do you see a different way to conceptualize this example that doesn’t make it a free speech question, or do you think free speech isn’t particularly valuable?

Special, you say that you want to take “free speech” as a given and not justify it, but it looks to me like both your first and second comments try to justify it. At least, the aphorism in your first comment looks like an attempt at justification, although, as I said, that aphorism seems to me to reject a right to protest. And your second comment’s insinuations about “a pluralistic society” and “vigorous public debate” definitely seem like such a justification. I explicitly rejected the second in my comment and specified what kind of imposed speech seemed relevant to me to public debate. As for a pluralistic society, that seems to me to demand that people leave each other alone.

No, this is not at all about “freedom of speech”; this is about the “right to protest.” The right to protest may be orthodox in America, but conflating the two is not, and even if it were, I will not put up with such abuse of language.

As I said above, I can see an argument that democracy requires forcing political speech on unwilling hearers. But I can only see an argument for forcing it on the public as a whole, not on a narrowly target group; and this example is not political speech.

No one is proposing limits on the content expressed by the Miskatonic Human Studies department, nor on their right to associate in opposition to Cthulhuism. The sticking point is their place and manner of expression, and that’s a freedom of assembly question.

Freedom of assembly has historically been curtailed in the Anglosphere to a much greater extent than freedom of speech or association, and protests are very often broken up on shaky grounds, but presumably we want to stay on safe territory as far as civil rights are concerned. From that perspective, the modern consensus seems to be something along the lines of “people have the right to peacefully protest on public property, but not to the extent of abuse or harassment, and not to physically interfere in others’ legitimate pursuits”.

The exact point at which nonviolent protest starts to constitute abuse or harassment seems to me to be an open question. Taking pictures of Cthulhuist worshippers and sending calamari sauce to their home addresses is clearly over the line, but loud verbal heckling, waving blown-up pictures of the Dream of the Fisherman’s Wife, etc. all seems more ambiguous and I think you could make a principled argument either way. The “free speech zone” concept is an extreme solution to this sort of problem, but I’m not sure I’d call it an obviously illegitimate one, especially in an age when there are plenty of ways to get a message out that don’t involve an angry mob.

If I were good at explaining my political views, I’d be Scott. I shall, however, continue to fumble forward. 🙂

Nornagest is right that the actual concern here is about assembly, not speech. However, the same argument I’m trying to make applies: the right to assembly must be balanced against the cultists’ right to be left alone (and perhaps their freedom of religion, depending on how you want to carve it).

Douglas, you make a strong point that _protest_ is not _speech_. We can send the students home and tell them to publish tracts on their blogs. I can steal Nornagest’s clarification and switch to the right to assembly, but that seems like a dirty trick.

Rather, I should fess up that I have, in fact, conflated protest and speech. My semi-justifications are a fumbling attempt to assert that protest should be considered speech and protected on that grounds. I still assert that this is an orthodox position, but I am not equipped to defend that assertion at this time. Is that conflation your true rejection? I’m sure some googling can find principled arguments for it, if need be, but you probably don’t want to argue with a google-parrot, so I’ll spare you that, unless this is the real sticking point.

It seems to me that the right to publicly protest is recognized and orthodox (though, as Nornagest pointed out, more limited than strict speech, and blurry around the edges). Do you disagree that this is the case, or do you think that the status quo is wrong?

I’m trying to visualize what your suggested alternative is. I imagine that the police will disperse the protesters, and tell them to sign up for a slot on the 9pm political broadcast, which is mandatory viewing for everybody, so that they can rant to the general public about how cultists are destroying the moral framework of our society. I expect they’ll be preceded by a neo-nazi complaining that Jews are destroying the moral framework of our society, and followed by feminists complaining that men are destroying the moral framework of our society. I can see why they’d prefer the relative respectability of a public protest, but maybe I’m missing something. 😉 I’m reminded of the mandatory public access channel on cable.

(Todo: research arguments for why protest should be/not be a right, and see if I really believe that.)

No, conflation is not my true rejection. Conflation is an objection to you deriving your position from orthodoxy, but I myself do not care about orthodoxy.

There are several things you could mean by “orthodox.” The courts have strongly endorsed the right to protest. But I would not say that we as a society agreed to this. Certainly there is no consensus among individuals (though maybe among courts) of whether this right stems from speech or assembly. I’m pretty sure this is not what the Founding Fathers meant. I think freedom of speech was slipped into the Bill of Rights at the last moment, without much clarity of their intent; the previous emphasis being on freedom of press, which is much more clearly about content. The First Amendment does include “the right of the people peaceably to assemble, and to petition the Government for a redress of grievances” which is definitely a right to protest, but limited to protesting the government.

I was not seriously suggesting systematically forcing speech on the population as a whole. I was thinking of explicit coercion, but I was also thinking of protests in a busy downtown, chosen so that they can impose their protest on a big cross-section of people, rather than targeted at the narrow opposition. Also, I gave the example of political marches through residential neighborhoods, as a way of imposing politics on an audience of intermediate specificity. I have more sympathy for those examples than for the highly targeted protest in the original example.

Contrary to what other commenters have said, I think 3 has the most obvious answer, and 1 and 2 are more difficult. The developer owns the property, and can do whatever he wants with it. If the local residents don’t like it, too bad for them.

I also think 3 has the most obvious solution: insufficient data. Under a semi-Lockean view of land rights (or at least the view that I remember being ascribed to Locke; I should reread this part of the Second Treatise to see if that’s accurate), you can’t abstract away from the traits of the competing groups. If the cultists have highly advanced technology and are part of a healthy civilization and the current inhabitants are lumpenproles who spend their days taking ultracrack and stabbing each other for fun, the cultists have the legitimate claim, because the lumpenproles can’t do anything but take up space. If the cultists don’t do anything but chant and build statues, and they want to absorb an average town and suck away all its resources to tile the place with statues, the townies have the legitimate claim.

* Are you still actively working on your utopian fictional society ideas? Is that an ongoing interest?
* Do you have any links to interesting/obscure utopian stuff?
* If people comment on old posts, will you be likely to read the post? Likely to reply?

1. Not really. Shireroth lost most of its people and got in a cycle of “nobody goes there, it’s too quiet”, which I am unfortunately perpetuating. I don’t have much more to add to my conception of Raikoth anyway at this point.

2. Freedom and Compassion’s post upthread is the most interesting thing I’ve seen in a while, and looks more like the direction I would try to take utopian thinking nowadays.

3. Comments on posts older than 30 days are banned (for spam purposes). Anything at around the 30 day limit, there’s about a 33% chance I’ll see.

It definitely seems like “communities of good” is a sensible way to progress – like your niceness/communities/civilization post, I guess. My feeling is that such communities need to be tied into some reasonably solid and hard-nosed economics to actually go anywhere. Except for the cooperative movement and the whole Kibbutz thing, which both seemed quite economically aware, I can’t think of many examples of utopian groups actually getting anywhere. My own tendency is to think we need to move in the direction of some kind of economics of goodness for ethical communities to progress…

Anyway on a more fictional theme I’d love to see you stay active on the utopian stuff as utopian+good writer=great readin’ 🙂

Does anyone have a non-commercial, layperson-accessible source on known (not suspected) effects of variation in the human microbiome? I’ve got a sci-fi half-idea growing in my head, and I want to feed it more information to see whether it grows to critical mass or dies under its own weight. Breeding plotbunnies is a terribly Darwinian process.
The idea seems to revolve around people gaining super-abilities by messing with their gut biomes, though some enhancements (and negative side effects) can be transmitted by saliva contact.

Hi Scott, been lurking here for several days (arrived via links to your posts on Neo-Reactionaries). I just wanted to say “thanks” for reintroducing me to someone I used to argue with on usenet 15 years ago. You’ve banned him, but reading his comments here actually demonstrates an instance of “Cthulhu swimming right” (if I understand the phrase correctly). Also have enjoyed several of your posts. Cheers.

I think I’ve mentioned this book in one of these comment threads quite recently (or someone else did, and I agreed it was a good one? Anyway…), and I’m not sure it’s completely what you’re asking for, but I still thought I should mention it again here: if you have not already read it, I’d definitely recommend Samir Okasha’s Evolution and the Levels of Selection.

I may or may not have seen it. There have been a few discussions on group selection in the comments recently, and it’s likely that I didn’t see them all. Thanks for the recommendation, though. I’ll see if my library system has it.

James Donald is talking about SSC again. This time, he theorizes that Scott’s banning of the rightmost commenters has created a comment section which leans to the left of Scott. These leftist commenters then attack Scott whenever he makes a political post, which has caused him to make fewer of those posts.

It seems to be at least somewhat common for people to think physical continuity at some level is important for personal identity/nature of self stuff. Personally I do, I would not consider a sudden mind upload to be the same person as the original. Though I think I would be OK with slow neuron by neuron replacement. My feelings on this are somewhat unstable though and could change.

Anyways, I was thinking about this in relation to fictional afterlives (because I am working on a story which involves one that I may or may not try and write (badly) in the future). And it occurred to me that while people often question continuity of identity with Star Trek type transporters and mind uploading in fiction, they never question it in regards to the afterlife. I don’t think I have ever heard of anyone question the idea that souls in Heaven are the same people as the original. This seems strange to me? Possibly a result of differences in the audiences of science fiction vs fantasy? I will include stuff about this in my story if I ever write it (assuming my opinion does not change between then and now. I still might bring it up even if it does).

Brandon Sanderson (Mistborn, etc.) and Michael Swanwick (The Iron Dragon’s Daughter) are probably the most famous names in that space, but I think I was actually thinking of Pact (a web serial by Wildbow, perhaps better known for the superhero series Worm). The metaphysics there isn’t fully explained (yet, at least), but it clearly inherits from Enlightenment systems of ritual magic, and the protagonists often solve problems by figuring out some aspect of how it works or how it’s being used. It might be the only thing I’ve read that takes a decent stab at Goetic demonology.

The presence of your soul in the afterlife usually implies that your soul is the real source of your identity, and it just happens to reside in your body. If there are souls and an afterlife in a setting, rejection of physicalism implicitly goes along with it.

Most fantasy is non-reductionist. You are assumed to have an immaterial and irreducible soul, mind, or life-force, which is what is really you, and which currently inhabits your body, but which could be separated from your body and end up in the afterlife, or returned to your body for a resurrection, or even end up in another body through reincarnation or some kind of Freaky Friday spell. So there is no debate, because everything follows if you just accept the premise, which you do because of your willing suspension of disbelief.

Most science-fiction is nominally reductionist, and most science-fiction fans are materialists who don’t believe in souls, life-forces, or whatever, so we have to analyze it as we would real life, and we start from the assumption that everything which a person is can be found in their body. If you are a pattern-identity theorist, this just means you are a program which is currently instantiated in your brain, and which could in principle be copied and instantiated in another brain, or read and instantiated in silicon, so there is no problem with transporters or uploading (the real headaches come when you contemplate things like running more than one copy of the program at the same time, or other high-level transhumanist themes). If you identify with your body/brain rather than with the pattern instantiated in your brain, or if you have hang-ups about continuity of consciousness, then transporters and uploading become problematic. Hence, debate.

I do not have a strong gender identity. I’m male (physiologically, and genetically), but mentally I’ve never really cared. I do not exhibit stereotypical male behaviour… or stereotypical female behaviour, for that matter. Sometimes I’d like to, but I’m afraid of the reactions I’d get, and I don’t care enough for it to matter. I’m an engineering type; I prefer working with machines, far more than people.

Four times out of five, someone who talks to me for long enough, assuming a text-only conversation, will spontaneously decide that I am female. I don’t usually bother to correct them. I’m not entirely sure they’re wrong.

I’m broken, in a lot of ways. As a child I always preferred playing with the girls, although that stopped soon enough (boy! icky!); I’ve never once enjoyed typical boy games, though there’s enough overlap that you wouldn’t notice. Instead, I retreated into books. Common enough, is it not?

One of my favorite hobbies is writing fanfiction. Hey, I know the statistics.

You can see where this is going, I suppose. My digit ratio is 1:0.97. Not that it means anything, the curves overlap too much to draw any conclusions, but it’s seeing that number that finally made me want to write this out.

I don’t identify as transgender—

I’ve learned not to identify with any gender.

But I can’t shake a nagging feeling that maybe, if I’d been a different sex, my life would have more people in it. Or I might be slightly more comfortable in my own skin. It’s something I’d like to try, sometime, once the technology is perfected and it’s fully reversible.

I can’t be sure, you see.

This post doesn’t have a point. I just wanted to write it down, somewhere.
