1,009 Responses to Open Thread 84.75

When I google that phrase, I find lots of people claiming to give lists, but they are just true conspiracies, not things that were rumored or leaked long before becoming the conventional wisdom. Ideally I want things that were dismissed as conspiracy theories. The lists make no attempt to document this and don’t really seem to even claim it. The CIA is a conspiracy. Everything it actually does is “a true conspiracy theory.” But what did people talk about? The top of all of these lists is MK-Ultra. But was that ever a conspiracy theory, or did it jump straight into consensus reality when the NYT published it in 1974?

The first thing that comes to mind is mass surveillance programs; a lot of rumors and accusations circulated in the late Nineties before the programs were openly disclosed in the post-9/11 world. ECHELON, etc. But I’m not sure where to draw the line between a conspiracy theory and an open secret here.

I think that is a good example, but it seems so hard to demarcate that it may demonstrate how hard it is to evaluate examples.

One problem is that the NSA’s job is to collect information, and most of the uncertainty is the continuous question of how good it is at that job. It is easy to not notice disagreement on continuous questions, particularly if people aren’t precise. One qualitative point is spying on allies. My impression is that it was widely accepted in France and Germany that the Five Eyes conspired to spy on them, while this was rarely discussed in English, probably because it was seen as conspiratorial.

The idea that there were all these communist spies in the US government was deemed a conspiracy theory, though, wasn’t it? Then the numbers show there were a whole bunch. Additionally, there were individuals presented as innocent – accusations to the contrary presented as conspiracy theories – who in retrospect were guilty.

Julius and Ethel Rosenberg come to mind. My history textbooks, though generally good, still listed them as casualties of the Red Scare in the ’90s. (In fairness, those books may have been published before the former Soviet Union declassified information confirming their spy status.)

My recollection is that, more or less, VENONA plus (much later) declassified former Soviet information strongly suggested that Joe McCarthy was right. At least, right in saying that crucial U.S. institutions including the military and State Department were widely infiltrated by Soviet spies (he was wrong on having evidence and on several specific examples). People today still decry “McCarthyism” as paranoia even though, in broad strokes, he was correct. That strikes me as a conspiracy theory being true (ish) but which was nevertheless so widely derided as to still be considered a conspiracy theory.

I dunno, I think there’s a meaningful difference between “This guy was a witch hunter and witches don’t exist” and “This guy was a witch hunter and he didn’t properly identify who the witches actually were.”

It might not vindicate McCarthy, but it certainly proves his most ardent critics wrong as well,

Also, “the red scare” as a whole is certainly a concept that continues to be taught and perpetuated, even independently of McCarthy as an individual.

To the extent that communist spies actually did exist, it was not a “red scare” but rather a “correct estimation of foreign intelligence activity” and hence, it was something dismissed as conspiracy theory that ended up being true.

I worded that poorly, but I was referring to the entire Red Scare when I said McCarthy. My understanding is that it’s generally accepted that everyone convicted of espionage, during that era, was rightfully convicted.

Everyone was worried about communist spies. McCarthy was nothing special.

His main complaint was that State refused to fire known communists, ie, that they investigated and cleared people, rather than just accepting every accusation at face value. His insinuation that these investigations were corrupted by communists seems to be completely false. It is harder to assess whether the strategy was sound, whether the investigators made a good trade-off between false positives and false negatives.

JayT, of the 9 he named to the Tydings committee, were any guilty?

There were about 10 people convicted of espionage, all guilty. But this was a small part of the Red Scare. There were about 100 people jailed for contempt of court and contempt of Congress. They were all guilty of contempt, but I’m pretty sure it was bad to jail them. There were about 1000 people jailed for being leaders of the communist party. They were all members, although I’m not sure they were all leaders. Was jailing them the right thing to do? I’m not sure.

Everyone was worried about communist spies. McCarthy was nothing special.

Almost no one in the Truman administration worried much about spies prior to the emergence of the Red Scare as a political phenomenon, and afterwards they worried far more about the damage it was doing to them than about actually catching spies. They put considerably more effort into the Lavender Scare than the Red Scare, which is ironic, because one of the primary justifications for the Lavender Scare was that gay men might be blackmailed into becoming spies. Much the same was true of the Eisenhower administration.

His main complaint was that State refused to fire known communists, ie, that they investigated and cleared people, rather than just accepting every accusation at face value. His insinuation that these investigations were corrupted by communists seems to be completely false.

No, it wasn’t. For almost any spy in the period in question, one can give an example of them being accused of spying, only to have fellow spies or fellow travelers intervene to make sure that, at most, they were transferred somewhere else. That’s how the spy rings worked.

It is harder to assess whether the strategy was sound, whether the investigators made a good trade-off between false positives and false negatives.

Considering that the CIA’s official, post-Cold War assessment of the situation was that no country in history has ever been more thoroughly penetrated by spies than the US and UK governments were in the 40s and 50s, it’s safe to say that their strategy was definitely not sound.

He never named many people. Of the 9 he named to the Tydings committee, were any guilty?

The standard here isn’t guilt or innocence; none of those people was on trial for espionage. It was whether they were a reasonable security risk or not.

no, it wasn’t. For almost any spy in the period in question, one can give an example of them being accused of spying

This seems to be mostly a truism: you only know about them if they were exposed. Exposing them involves them being accused of spying (which can consist of them accusing themselves).

We know for a fact that some spies were active for long periods and then turned themselves in (like Elizabeth Bentley and Whittaker Chambers), so they were never accused before they exposed themselves.

It’s fairly obvious that most exposed spies are exposed after being accused, as every person who turns himself in fully voluntarily will tend to accuse many other people. The spies that are not exposed tend to remain unknown.

Finally, your claim that spies would effectively cover for each other lacks evidence. I can’t remember ever seeing a case of that in my research, nor does it seem credible that a person can commonly just be saved like that. I concur with Douglas that if that is common, you’d surely be able to give a few examples.

My understanding is that it’s generally accepted that everyone convicted of espionage, during that era, was rightfully convicted.

Tsien Hsue-Shen was never convicted, just incarcerated for several years and then deported. But it’s pretty clear that he was at that point guilty only of attending meetings where people tried to convince him to become a Communist (he was a university professor in Southern California in the 1930s through 1950s, so duh) and maybe sub-Hillary levels of carelessness with classified information.

At that point. After that point, he was an A-list rocket scientist living in China with Chinese citizenship and the support and acclaim of the Chinese government, so he went to work designing rockets for China. Mostly for use against the Russians or in peaceful space travel, but about twenty of which are still aimed at the United States with five-megaton thermonuclear warheads. Way to go, Tailgunner Joe.

you only know about them if they were exposed. Exposing them involves them being accused of spying (which can consist of them accusing themselves).

Is this really correct, in the case of Soviet spies? The Venona papers weren’t declassified until 1995, and I was under the impression that most Soviet spies were uncovered either through Venona or through post-Soviet releases of KGB records. The latter didn’t involve public accusations, for obvious reasons, and I don’t believe the former did either. I’d be interested in hearing of any cases of “parallel construction” of an evidence trail originally obtained via secret decrypts, I suppose. But the appropriate response to covertly uncovering a spy’s identity isn’t “Hey, our decrypted Soviet messages say this guy’s a spy! Did you know that we can decrypt Soviet messages?! And that we know about this spy, but presumably not about any others we haven’t accused yet?!” At best you should feed them false information, at worst you find some plausibly-non-espionage-related excuse to move them away from sensitive true information.

“There were about 1000 people jailed for being leaders of the communist party. They were all members, although I’m not sure they were all leaders. Was jailing them the right thing to do? I’m not sure.”

I had no idea about this. Were people jailed just for being in the Communist Party, or because there was evidence of concrete treason by the party? This just sounds… very un-American.

Wikipedia says that there were only ~140 (postwar) indictments of CPUSA members under the Smith Act, leading to ~100 convictions (and ~70 wartime indictments of others, mainly fascists and Trotskyites). While Congress eventually attainted the Party, no one tried to enforce that. The Smith Act required prosecutors to prove the guilt of the Party, but not of the individuals. The prosecution did a pretty lousy job of proving it, but the Party was guilty of other things.

The Wiki article on the Smith Act is interesting. At one point a bunch of defendants who held either isolationist or pro-fascist views were indicted.

Roger Baldwin of the ACLU campaigned against renewing the prosecutions, securing the endorsement of many of the defendants’ ideological opponents, including the American Jewish Committee, while the CPUSA held out for prosecuting them all to the limit.

That would have been about five years before the CPUSA leadership, and then some of the membership, were prosecuted.

MK Ultra was public knowledge, but it wasn’t until Ted Kaczynski was discovered that it became relevant in the public consciousness — something that had real-world consequences. Similarly for ECHELON, which began as whispers, then became an open secret as it grew to include the UK and Australia, then public knowledge after 9/11, and finally relevant after Wikileaks and Snowden. You could also include Area 51, a longtime open secret that became public knowledge, although it diffused out of relevance because the truth (a testing ground for special but not-alien aircraft) is underwhelming.

The Katyn massacre certainly counts, but probably conspiracies during war or organized by totalitarian regimes are not interesting, and Katyn was both.

To get true conspiracy theories you would need a secret to be partly exposed, but the people who know about it (or who it adversely affects) to not have the resources to prove it quickly. So one good place to look is for organizations dealing with colonial subjects or a domestic underclass.

The Katyn massacre seems like a good example, but one that requires expanding my definition. The Soviet hypothesis was widely discussed in the Western press, at least back to the end of the war, so it wasn’t dismissed. But it failed to reach consensus, probably because of the lies and omissions of the Western governments.

This doesn’t qualify as a conspiracy ‘theory’ per se, but… The Thuggee cult in India is a pretty good example of a real-life conspiracy/secret society, albeit in a premodern society. At least according to the Wiki page, their very existence was unknown before the British publicized it.

I suppose any secret society qualifies as a conspiracy, though I’m not really clear how many of them (Freemasons, Knights Templar, etc.) were ‘real’. The Thuggees were one of the few to actually be documented and prosecuted.

The official story on 9/11 is a conspiracy theory (it’s illegal to hijack a plane, and the terrorists worked together to do it, making it a criminal conspiracy). Once you accept that all 9/11 hypotheses are conspiracy theories, we’re just trying to find one that fits the most facts.

The official 9/11 story is easily debunked. For example, Osama Bin Laden repeatedly denied involvement, and the tape where he admits it appears to not be him (they call this the “Fatty Bin Laden” tape, because in the midst of dialysis treatment, Bin Laden appears to have gained 30-50 pounds, and his nose has changed shape).

The whole point of terrorism is to take credit for your misdeeds. Bin Laden denied it, so more investigation is needed to determine guilt. Alternate hypotheses that depart from the official narrative have more explanatory power, but all require further investigation via physical evidence to have been “proved.”

Here’s a list you may want to consider. These are the ones that have been confirmed by official sources. Take your time as you descend the Rabbit Hole.

Stalin actually was a dictator who actually murdered millions and actually was orchestrating a global conspiracy in service of himself and communism, agents of which actually were active in the very highest levels of the US government, revealing its secrets for largely ideological reasons. This ceased to be true by the mid-50s, but all of it was dismissed as crackpot.

I’m not even close to an expert on any of this, but would some of the Pharisees’ criticisms of Christ apply here?

At the end of the day, regardless of whether he’s the Son of God or not, he DID in fact promote heresies, raising up a cult of followers and vastly expanding it in such a way as to greatly diminish their control and influence over the Jewish religion.

Examples? There are certainly real child abuse conspiracies, but child abuse conspiracy theories seem to be highly anticorrelated with reality – if there’s any significant public traction for “X, Y, and Z are molesting children and getting away with it!”, whether in pizza restaurants or preschool tunnel complexes, the safe bet is that X, Y, and Z are not actually molesting children. I can’t think of any counterexamples where the theory turned out to be true; there probably are a few on stopped-clock grounds but they seem to be rare.

It’s almost as if the actual relevant authorities are both good at and genuinely interested in investigating allegations of child abuse, so that only the most effectively secret and thus un-theorized conspiracies can endure past the first allegation.

I definitely agree that the relevant conspiracies are usually right on stopped-clock grounds, but they are true nevertheless. So most cases involving notable people probably count. I think Rotherham is another example.

One example is that the Dutch army had a training accident with a mine, in which someone died. They asked a new employee to tell the widow that the dead man had made a mistake, but the new employee thought it was fishy, so he investigated and found that there had already been extensive reports that the design of the mine was faulty. He sought publicity, and the ministry of defense then got him declared paranoid and schizophrenic, but this was based on a falsified psychiatric report.

Eventually the ministry of defense was forced to officially declare that they had misled people for 18 years.

I don’t know how many commoners believed that this was a true conspiracy, and I suspect that in most cases such a belief grows over time as more evidence becomes available (while the authorities cannot then back down, even if they start to look like fools for telling fairly obvious lies).

PS. Interestingly the official records are sealed until 2026 and there is a conspiracy theory that these mines had been stored at the site of a large fireworks explosion. So we have a conspiracy theory that became a proven conspiracy that begot a new conspiracy theory…

Was originally spun as “random unforeseeable protests due to a youtube video, nothing we could have done” by all the official people in charge. Right-wing talk radio cried foul right away, and its hosts were dismissed as paranoid conspiracy freaks. Exactly how right they were is probably still up for debate, but it now seems reasonably well established that they called for help, were given none, and that the youtube video had nothing to do with it.

How widely known, and how specific in its details, does a conspiracy theory need to be for it to count? The conspiracy theories that turn out to be true are seldom specific, widely known, and officially dismissed before they are exposed beyond doubt.

I am not sure if (before Snowden) there were many coherent conspiracy theories about how the NSA spies on all of the internet, just vague rumors and an old report about ECHELON. I can’t remember any particular dismissals, because there was only speculation, but if you had outlined the PRISM or XKeyscore program to regular people, you would have sounded like a conspiracy theorist to most. That’s an example of “theory not specific enough.”

Likewise, I heard about the Chicago police’s black site only after it was reported by the Guardian and became “big news”. But there might’ve been some local talk in Chicago before the Guardian article; that might be an example of “not widely known / rumored enough”.

Oh, this isn’t a “before Snowden” thing at all. It’s been pretty much openly acknowledged for a long time that the NSA was doing data collection on a massive scale; the Snowden leaks filled in a lot of the details and brought it into public consciousness in a way that it hadn’t previously been, but they did not fundamentally change the conversation, at least in the infosec-and-miscellaneous-paranoid-crypto-geeks community.

As I mentioned above, the event that really changed all this was 9/11. That’s when the question stopped being whether the NSA was spying on Americans and started being how much, although certain programs were at least widely rumored even before then.

I am not sure if (before Snowden) there were many coherent conspiracy theories about how NSA spies on all of the internet, just vague rumors and an old report about ECHELON.

There were at least THREE major revelations. ECHELON in 1972 (!), AT&T Room 641A in 2006, and Snowden. In between those revelations, everyone went back to calling people who claimed the US government was engaged in mass surveillance “tinfoil hatters”.

In between those revelations, everyone went back to calling people who claimed the US government was engaged in mass surveillance “tinfoil hatters”.

We must travel in very different circles. Most people I know, who (partially for family reasons) include a disproportionate number of the kind of people who get invited for job interviews at the NSA, thought that the surveillance revealed by Snowden was more modest than what they expected.

Most people I know […] thought that the surveillance revealed by Snowden was more modest than what they expected.

Seconded, and see also the discussion at e.g. Bruce Schneier’s blog. Everyone paying attention understood that the US government was monitoring essentially all digital communications in the US, and the interesting discussions were about which parts they paid specific attention to. The Snowden revelations were a whole lot of “that’s all?” with side orders of “that’s neat!” and “maybe now people will care!”.

To the extent that people who weren’t paying attention came up with anything that could be called a “conspiracy theory”, they were pretty much wrong about all of the details beyond the USG monitoring lots of stuff.

Yes, no one with a connection to the intelligence or infosec community was surprised.

I’m talking about those who weren’t so connected; ordinary techies and complete outsiders. And pointing out that even they should have known better, seeing as similar programs had been publicly revealed at least twice before.

1) Lower-middle-class black folks, who are predisposed to hold hostile/paranoid views about the spooky parts of our government (for reasons).

2) First- or second-generation Latin American immigrants, who are predisposed to hold hostile/paranoid views about the spooky parts of our government (for other reasons).

3) White leftists/Libertarians.

4) People with professional or family connections to infosec, the military, the military industrial complex, or the secret squirrels.

You’ve exhausted the majority of my social circle. So I may have some trouble grokking what the normies thought of the NSA.

Pre-Snowden, the biggest thing I would have expected was that the Five Eyes were circumventing constitutional privacy protections by outsourcing surveillance to their sister agencies. E.g. the NSA using GCHQ to spy on American citizens. The Snowden documents have actually caused me to substantially downgrade my prior probability on any large scale version of that theory.

Anthropologists in the 1950s and 1960s believed that they were the victims of a long-term, broad-based, government-conducted, McCarthy-led conspiracy to suppress their research and intimidate individual researchers.

Price [in Threatening Anthropology] draws on extensive archival research including correspondence, oral histories, published sources, court hearings, and more than 30,000 pages of FBI and government memorandums released to him under the Freedom of Information Act.

He describes government monitoring of activism and leftist thought on college campuses, the surveillance of specific anthropologists, and the disturbing failure of the academic community — including the American Anthropological Association — to challenge the witch hunts.

Today the “war on terror” is invoked to license the government’s renewed monitoring of academic work, and it is increasingly difficult for researchers to access government documents, as Price reveals in the appendix describing his wrangling with Freedom of Information Act requests.

A disquieting chronicle of censorship and its consequences in the past, Threatening Anthropology is an impassioned cautionary tale for the present.

Once more into the breach, dndnrsn! I’m with you, ham and pineapple is quite fantastic, for the usual reasons of mixing sweet and savory and salty so as to provide balanced contrast in a dish. Canadian bacon and other such meats are a suitable substitute- at home, I’ve made it with smoked pork loin as well.

I enjoy adding jalapenos as a third ingredient sometimes for variety, but the simpler pairing is probably my preferred taste.

Burn the heretic! I’m all for combining sweet and savory (some Asian dishes do it quite well), but if there were a devil, pineapple would be his pizza topping. And ham is also, as others have quite accurately noted, a most inferior member of the pantheon of pizza meats.

Well, I’m sure you could make a bad pepperoni pizza, but it seems harder. My theory is that people tend to use very thin slices of pepperoni because it’s strongly flavored. With ham, you get big greasy slabs of it.

I’ve never had a pizza that has been soaked with grease as a result of pepperoni. I’ve had many pizzas that were almost inedible because of the amount of grease bacon or ham topping had left. Bacon is probably my least favorite pizza topping, worse even than chicken.

My favorite pizzas stretch what’s allowed to be called a pizza just a little bit, but they are:
– Replace the pasta sauce with beans+salsa. Use Mexican cheese blend instead of mozzarella. Toppings include avocado, pico de gallo, and onions.
– Replace the pasta sauce with pesto diluted with spinach. Use mozzarella plus feta for the cheese. Best toppings are artichoke hearts, black olives and broccoli.
– For a regular pizza, more toppings is basically better, but my favorites are green olives and mushrooms.

For meats, I like pepperoni, bacon, and grilled chicken, and for veggies I like onion, peppers, olives, and garlic. I generally like having a combination of 1 or 2 of each of meats and veggies on a pizza. And I like having a lot of garlic in the tomato sauce if not put on as a topping.

The sogginess is the best part of the mushrooms. I hate it when they get all dried out. Not as much as I hate raw mushrooms on a pizza though. That’s just an abomination. They need to be sauteed in butter before being put on the pizza.

While I wouldn’t go so far as to say it’s delicious, cold leftover pizza is pretty good, and better than reheated leftover pizza in most cases. ETA: Though, admittedly, it’s mostly pizzas with meat toppings that are actually good cold. A cold slice of cheese or veggie pizza will usually be just okay.

One of the reasons to order a pizza is that after you enjoy the deliciousness of warm pizza, you get to put the rest of it in the fridge so that you can enjoy the totally different deliciousness of cold pizza later.

Cold pizza is OK. I’m never deterred from it but wouldn’t seek it out.

Related: cold Chinese or Indian leftovers are almost as good as reheated. No problem eating those cold. The big mystery is why they’re never even half as filling the next day as they were the night before.

Generally good choices, but contrary to the common theory that pizza is hard to screw up, some places manage it, and bell peppers are one of the ingredients that can end up kind of gross if you get them from the wrong place (mushrooms as well, for that matter, though screwing those up seems to be less common). But I suppose the lesson of that is don’t get pizza from such places.

I wasn’t going to answer this but I’m hungry now so it’s fun to think about.

My ideal pizza is deep dish of course. Thin crust pizzas are occasionally good but I’m never going to get as excited about a thin crust as a deep dish.

Topping-wise, my ideal pizza could go one of two ways depending on whether I’m in the mood for a “red” pizza or a “white” pizza, so named because of the type of sauce used. My mood usually will depend on what else I’ve eaten that day and to some extent how cold it is outside. (Cold and eaten healthy all day = red pizza time.) (It’s much more permissible for the white pizza to be thin crust.)

Red: spicy Italian sausage (preferably in large jagged chunks, as though it was crumbled by hand by someone with severe arthritis), onions, bell peppers, black olives. Don’t be stingy with the cheese. Anchovies on the side. Every other slice will receive a liberal sprinkling of crushed red pepper.

White: roasted artichoke, spinach, roasted garlic, onions, black olives, tomato slices, and basil. The cheese is fresh mozzarella (the kind that comes in a ball so it’s cut into circles). The white sauce is actually green because it’s pesto.

Grated Parmesan will be liberally sprinkled over either pizza.

Toppings I am opposed to:
– Mushrooms
– Pineapple
– Chicken
– Bacon
– Pepperoni if it’s those thin wide discs…pepperoni should curl up into thick little bowls
– “Buffalo chicken” pizza, “barbecue” pizza, “cheeseburger” pizza, “taco” pizza, and other gimmicky abominations. If you want a cheeseburger just order a cheeseburger. If you want a taco just go get a taco. Cheeseburgers and tacos are phenomenal. To adapt Hank Hill: “Can’t you see you’re not making those foods any better, you’re only making pizza worse?”

Civil Aviation: Part 5
I should start talking about safety by emphasizing that air travel is very safe. It’s about an order of magnitude safer per mile than a bus or a train, and two orders of magnitude safer than a car. There is literally no safer way to travel long distances, and a tremendous amount of work goes into that, because an airliner is massive and complex, and is constantly trying to break. This post should not change your opinion about flying, but if that’s something you’re really nervous about, you probably should stop reading now.
My previous job was as a small cog in the machine that sees these problems before they get out of hand, and fixes them. I’ll outline the process I participated in (this is a composite of various experiences):
Aluminum is a wonderful material. It’s light and strong. But it also cracks easily. So there are hundreds of required inspections done at various intervals on every airliner out there. One day, during one of these inspections, someone notices a crack in a fitting in the wing root. It’s rather large, so they call their engineering department, who looks at it, measures it, and checks the various manuals which tell them what they can fix on their own. (It’s said that an airliner is ready when the weight of the paperwork is about the same as the weight of the airplane, so there are a lot of those.) They decide they don’t have the authority to fix it, so they call the manufacturer. This is called an AOG (Aircraft on Ground), and it’s a big deal, because the airline isn’t making any money on the plane while it’s doing that. The manufacturer has a team of engineers who look it over and come up with a quick-and-dirty fix to get the plane back into the air. (In this case, quick-and-dirty means that they have to watch the area very carefully to catch any further cracking, but so long as nothing is found, it’s safe.) They then go to the next crisis, which is probably a truck running into a different airplane on the other side of the world.
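The quick-and-dirty fix above works because the engineers can estimate how fast the crack will grow, which tells them how closely it needs to be watched. As a rough sketch of that kind of estimate, here is the standard Paris-law fatigue model integrated numerically; every constant, stress level, and crack size below is invented for illustration and is nothing like real certification data:

```python
import math

def cycles_to_grow(a0_m, af_m, stress_range_mpa,
                   C=1e-11, m=3.0, Y=1.12, steps=10_000):
    """Numerically integrate Paris's law, da/dN = C * (dK)^m, with
    dK = stress_range * Y * sqrt(pi * a), from crack length a0_m to af_m.
    Returns the estimated number of load cycles. Units: metres and MPa,
    with C in m/cycle per (MPa*sqrt(m))^m. All defaults are illustrative."""
    cycles = 0.0
    da = (af_m - a0_m) / steps
    a = a0_m
    for _ in range(steps):
        dK = stress_range_mpa * Y * math.sqrt(math.pi * a)
        cycles += da / (C * dK ** m)  # dN = da / (da/dN)
        a += da
    return cycles

# A 2 mm detectable crack growing to a 20 mm critical length under a
# 100 MPa ground-air-ground stress cycle (numbers made up):
n = cycles_to_grow(0.002, 0.020, 100.0)
```

Note the exponent: with m around 3, doubling the stress range cuts the crack-growth life by roughly a factor of eight, which is why a fitting designed "a bit too aggressively" can eat its fatigue margin so quickly.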
The reports from the wing root crack get turned over to another group, which analyzes them to see if there’s a systematic problem. Maybe the designers were a bit too aggressive when they did the fitting in question, and we need to replace it with a new design. So the design group is called on to figure out a better design. If the plane is still in production, the most important thing is to change the part being put on in the factory. Then, a service bulletin is issued to change the aircraft that are already in the fleet. This is the legal vehicle for making a change to an existing airplane. There’s a tremendous amount of regulation around the configuration of airplanes, to make sure that nothing gets on that isn’t approved. There’s a group of engineers who figure out how and when to make the change, and what other alternatives the airline might have, and then they hand it off to another person to write up (my old job). This is not as easy as it sounds. It’s a legal document, which means it has to be written very precisely to make sure that the airlines have to do what the manufacturer wants them to do, and it’s going to be going to airlines all over the world. Also, the FAA, who approves it, has very strict standards. And engineering is constantly refining their solution, often up to the last minute.
Eventually, the bulletin goes out. In this case, it mandates new inspections for cracking in the fitting, and if there is cracking, or if the airline wants to get out of the inspections, gives instructions for changing to the new fitting. A team goes out and makes sure that the replacement procedure actually works. (This is much more helpful than you’d think. Even the best drawings/computer models are sometimes inadequate, and when the plane in question is old and has a bunch of changes in the area, it’s impossible to know what the fleet looks like without going out and looking at it yourself.) If it doesn’t work, then a revision is issued. The FAA mandates that the bulletin be done, and the world’s other aviation authorities follow somehow. (I’m not actually sure how.)
20 years later, the airplane is out of production. Someone is doing another routine inspection in the area, and discovers a crack in the wing skin. They call the manufacturer, and more analysis is done. This time, they discover that by stiffening the fitting to stop it from cracking, they moved the stress into the wing skin, which is now cracking. This is obviously very bad, and alarm bells begin going off. A high-priority project is put together to get the wing skin in that area inspected before there’s an actual failure. This could mean that an SB is out and mandated by the FAA in a matter of days (normal time from project start to mandate is ~2 years), but that’s pretty rare. Usually, the first pass goes out with instructions to contact the manufacturer if problems are found, while a second round is done to figure out a long-term solution that can be written up in a revision to the Service Bulletin. The airlines are interested, as they want a solution that is as cheap to implement as possible, preferably one covered by the manufacturer. (The warranty is long expired, but that doesn’t stop them from asking.) Eventually, the chosen solution involves disassembling part of the wing, removing a whole section of skin, and reinstalling a replacement. This is annoying, and leads to an early retirement of part of the fleet. This is seen as good news by some people at the manufacturer, as it’s fewer of the things they have to support.
This same process happens on every other part of the airplane. Every part of the next airliner you get on is rigorously certified, carefully inspected, and everything is documented to make sure that no problems are missed. Yes, occasionally an airline messes up and flies a plane that isn’t fully compliant. But this is ultimately a non-issue. Everything is chosen to be very conservative (the inspection standard is to give two good chances for crack detection between when the crack becomes detectable and when it gets critical), so the only damage is to the airline’s bottom line when the FAA notices. Airlines outside of the western bloc are not always so careful to comply with safety directions, although the international airlines are pretty good.
I’m not sure I’ve said everything I want to on safety yet. I’ll probably talk about the causes of airplane crashes later on. Does anyone have other aviation topics they’d like me to discuss in the meantime?

What would ultra-economy air travel look like if airlines were allowed to radically sacrifice passenger comfort but not safety? I’m thinking something like strap-hangers in a full subway car, but perhaps there are other ideas.

The problem is that it does compromise safety. One of the things I’ll probably cover next time is safety in case of a crash (a survivable crash, that is, which is more common than you might think). The most important thing there is being able to get out. That comes in two parts. First, being packed in like a subway car hinders evacuation. Subways get away with it because they have lots of big doors. Not practical on an airplane. Second, that sort of seating is unlikely to be as safe in a crash. More injuries leave more people on the plane to die of smoke inhalation.
There’s currently a dispute between the FAA and certain passenger rights groups over the safety of current seat pitches. 28″ comes out OK in evacuation tests, but the tests are mostly run on skinny people who are paying attention. I’m not sure who is right here. I think the flyer’s rights people are idiots who don’t understand basic concepts like ‘the plane costs basically the same to fly no matter how many people are on it, so reducing the number of passengers we can put on it by mandating increased legroom must increase fares’. But they do have a point about the testing standards. On the other hand, I’m not sure that 28″ is that much worse than 30″, no matter what your standards.
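The fixed-cost point can be put in toy numbers (everything below is made up purely for illustration):

```python
# Toy illustration of 'the plane costs basically the same to fly no
# matter how many people are on it': the break-even fare is roughly the
# fixed flight cost divided by the seat count. All numbers invented.
flight_cost = 30_000.0  # cost of operating one flight, USD (illustrative)

def breakeven_fare(seats: int) -> float:
    """Fare at which ticket revenue just covers the flight's fixed cost."""
    return flight_cost / seats

fare_dense = breakeven_fare(180)  # ~$167 with 180 seats
fare_roomy = breakeven_fare(150)  # $200 if legroom rules cut it to 150
```

So cutting 30 seats from a 180-seat layout pushes the break-even fare up by about 20% under these assumptions, which is the whole argument in one line.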

The limit on airliner passenger capacity is set not by the number of seats, but by the number of people that the manufacturer can demonstrate will be able to escape ahead of the fire and smoke that will likely fill the cabin soon after a crash. And the manufacturers have already reached the point of regularly injuring people, sometimes severely, in these demonstrations. So while you’ll occasionally hear some joker talk about standing-room-only airline flights, these are rarely serious proposals and it is hard to see how they could come to pass.

They often fly quite a bit below the rated capacity of the planes, particularly on longhaul jets. On the other hand, those are longhauls, and comfort is worth a lot more on those kind of flights. Also, the premium cabin is bigger, which brings down total numbers. I’m not sure how the rules are written now, but I’m pretty sure that the FAA would not be amused if you tried to do standing economy and palatial first with the same passenger count as a currently legal configuration.

@Matt M
I really would not think that. But perhaps I’m just cynical about the lower end of the traveling public, who, so far as I can tell, operate on a very simple algorithm:
1. Buy whatever looks cheapest online, doing no research into reputation or fees
2. Complain that they aren’t treated well
So far as I can tell, these people would buy stand-up seats if they were $10 cheaper, and then gripe about it at great length.

Ryanair has managed to get itself into real trouble this time; there are a rake of cancellations due to not having enough pilots to cover flights, and their pilots are not willing to give up annual leave to come fly the planes for them. To be fair, knowing Michael O’Leary, the seeming offer of “we’ll pay you 12 grand to give up your leave and fly the planes” is probably full of small-print conditions (which seems to be the case), so I don’t blame them for not believing they’ll actually get paid if they give up their time off. Plus this is the result of years of an abrasive management style coming back to bite the airline in the backside.

The problem is that Ryanair ran an April-March vacation year (which lines up well with how air travel demand in Europe goes) and was told by the Irish aviation authorities to switch to calendar year last year. Not sure why they didn’t do more in the first couple months of the year, and it might have been exacerbated by pilots leaving for other airlines. I think they’re trying to clear the books before the holidays.
WRT O’Leary, he seems to have genuinely figured out that his old strategy needed to change. Ryanair has gotten a lot better over the last few years. I genuinely have no idea how reasonable his terms are. The article seems convincing, but it’s from a pilot group (who have obvious biases), and the one-year delay is almost certainly an attempt to stop pilots from jumping ship. Particularly if you’re handing out large chunks of cash, that’s probably a good idea.

I semi-regularly fly in and out of odd places on “regular charter” flights. I usually get charged for my seat, and an overweight charge when my bodyweight plus my luggage goes over a threshold. And no cheating by stuffing the pockets of my clothes; I have to step on the scale while they are filling in the load manifest just prior to departure.

There was a huge stink a few years ago about an airline charging for weight. Which died out very quickly when the facts came back: it was a small regional airline operating between small islands in the Pacific Ocean, mostly operating turboprop floatplanes, and mostly flying passengers of Samoan and Tongan ancestry. Damn straight they are going to weigh every passenger on load, and charge for poundage!

I have heard (from someone who spent a lot of time in the Soviet Union) that Aeroflot used to operate standing-room-only internal flights. I’m not sure how true this is, and (if true) if it was a regular thing or cargo aircraft pressed into service due to a shortage of passenger planes.

the inspection standard is to give two good chances for crack detection between when the crack becomes detectable and when it gets critical

To elaborate on this part, lest it sound reckless or cavalier: the inspection standard cannot be “no cracks, are you crazy, we’re not flying anything with cracks in it!” Everything made out of metal has cracks in it from the day it is built, and with a few exceptions (mostly massively overdesigned steel structures far too heavy to fly) the cracks will grow continually until the item is removed from service one way or another. So a “no cracks” standard really means “no cracks that I can see”: chasing away any pesky busybody peering at your airplane with a magnifying glass lest he see the cracks, scrapping perfectly good airplanes because cracks in one location were harmless but visible, and then having lots of airplanes crash because even invisibly faint cracks in another location turned out to be dangerous.

So we calculate how big the cracks can be before they are dangerous, and how fast they might plausibly grow, and as bean notes back out an inspection schedule and methodology that gives us at least two chances to catch them. Or, if that’s not practical, set an absolute maximum service life that will retire the part before the cracks could become dangerous.
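A minimal sketch of how such an interval might be backed out, using the Paris crack-growth law with invented constants. This is purely illustrative, not any manufacturer’s actual method; real analyses also use scatter factors and probabilistic detection curves.

```python
import math

def paris_cycles(a0, ac, C, m, delta_sigma, Y=1.0, steps=10_000):
    """Numerically integrate the Paris crack-growth law
    da/dN = C * (dK)**m from detectable crack size a0 to critical
    size ac (metres), for a constant stress range delta_sigma (MPa).
    Returns the number of load cycles spent in that window."""
    cycles = 0.0
    a = a0
    da = (ac - a0) / steps
    for _ in range(steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        cycles += da / (C * dK ** m)
        a += da
    return cycles

# Invented, vaguely aluminium-like constants; not real design data.
N_window = paris_cycles(a0=0.002, ac=0.02, C=1e-11, m=3.0,
                        delta_sigma=100.0)
# Inspecting every N_window / 2 cycles guarantees at least two looks
# while the crack is detectable but not yet critical.
interval = N_window / 2
```

Halving the detectable-to-critical window, rather than inspecting once per window, is what buys the “two good chances” in the standard quoted above.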

I 100% endorse this. Very close attention is paid to this sort of stuff, to the point where in a couple of cases, I’ve seen very aggressive reactions to problems that I think were almost certainly caused primarily by something more direct that wouldn’t be common across the fleet. (For example, the plane with the wing root crack had a really hard landing a few cycles ago, but we’re going to push out an inspection on all airplanes immediately.) On the other hand, I was proved wrong in the only case that I’ve seen the full follow-up on.

Mostly micro, but there are probably some in the millimeter range that you’d find if you look closely. A common inspection technique is to let a penetrant dye soak into the part and then wipe clean the outer surface; it can be surprising how many macroscopic cracks you didn’t notice until then.

My airplane currently has two cracks in the centimeter range, not in primary structure and being measured annually.

Everything made out of metal has cracks in it from the day it is built, and with a few exceptions (mostly massively overdesigned steel structures far too heavy to fly)

You’ll have to tell me what steel structures these are. A solid portion of my job is inspecting steel structures on dams, or bridges on government property, and I’ve not found one without a crack yet, especially if you’re talking about using NDT.

I actually look hard for cracks until I find a new one, because that emotionally validates spending $3000 to send me out to do the inspections (they’re required on a schedule, but I feel like I’m being lazy if I come back without finding something).

Right. With steel, it is at least theoretically possible to design structures where the inevitable cracks don’t grow in normal operation, or in three-sigma worst case operation or whatever standard you chose. It is more expensive to do that than it is to do “well, some of the cracks grow a little bit but meh we’re over budget already”, and I’m not in a position to say which standard the median bridge designer actually works to.

The cracks are there from the beginning and unavoidable. If they’re getting worse, in something like a bridge or a dam, that probably shouldn’t be happening.

If they’re getting worse, in something like a bridge or a dam, that probably shouldn’t be happening.

Every bridge you’ve driven over probably has cracks that are growing. If they’re brand new, they may not have any yet, but wait a little while. They’re monitored by bridge inspections, but they are there, and will require the eventual replacement of the bridge.

Like in a plane, keeping everything below threshold stresses isn’t practical. (Well, IIRC there’s no fatigue threshold in aluminum, but I mean at least lowering the stresses until you’re so far to the right on the S-N curve that the number of cycles makes no practical difference.) Corrosion will eventually get you anyway, so trying to chase out every last bit of fatigue probably isn’t worth it. You generally accept that bridges or other structures where high-cycle loading is a problem (like lock gates or some types of dam hydraulic control structures) will need to be replaced, or at least undergo very substantial repairs, due to cracking and corrosion. 50 years was the typical design life until recently. They’re talking now about 100-year design lifetimes, but this will probably require substantial use of stainless steel and the associated expense.
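The “way to the right on the S-N curve” idea can be sketched with Basquin’s relation; the constants below (sigma_f and b) are invented, ballpark values for a steel, not design data.

```python
def basquin_life(stress_amplitude: float, sigma_f: float = 900.0,
                 b: float = -0.1) -> float:
    """Basquin's relation, stress_amplitude = sigma_f * (2N)**b,
    solved for N, the cycles to failure. sigma_f (MPa) and the fatigue
    exponent b are illustrative values only."""
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

# Halving the stress amplitude buys roughly three orders of magnitude
# of fatigue life with these constants:
life_high = basquin_life(300.0)
life_low = basquin_life(150.0)
ratio = life_low / life_high  # 2**10 = 1024
```

That steep trade-off is why designers can often make fatigue irrelevant by lowering stresses modestly, and also why being “over budget already” tempts them not to.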

Now, many older structures are worse than they would be if they were built today because they have incredibly poor detailing (intersecting welds is a good example). However, this is mostly a function of us not understanding fatigue very well until the 70s, and we have substantial stocks of older bridges. I don’t know of anybody that expects that even modern bridge designs won’t have cracking.

How practical would tourist class sleeping accommodations be, where by “tourist class” I mean not a lot more expensive than current tourist class flights? I would think it would be possible to configure a plane with about the same number of passengers lying down as sitting up. Ideally one would want a design that let you switch, convert seats to beds and beds to seats. Failing that, you could have a design that was part seats, part beds. Obviously you would need a design where beds were stacked at least two high, perhaps three high.

Current long distance business class does what I am describing, but at a high cost and without stacking, and packing passengers a lot less densely.

I’ve also always wanted bunkbeds in cattle class. I am guessing the limit is evacuation, since it’d be bloody hard to get (especially less mobile) people out…

The other thing I wonder about is motion sickness – I hear a lot of complaints about anything other than upright, forward-facing seats causing it. (I go out of my way to choose rear-facing seats on the Google shuttle buses when I’m in the Bay, because I find them *more* comfortable, but apparently I’m unique.)

This used to be common back in the 40s and 50s, before jets arrived. But I don’t think it’s particularly practical today. First, seats are smaller. I wouldn’t want to try it with a seat pitch below 36″ or so, which is firmly in Premium Economy territory today, but was common then. They’re also narrower, although that might be solvable by going 3-high. Second, this is going to be heavy, and cut into the overhead bin space. Third, safety standards are a lot higher. You’ll get badly sued if you have someone fall out of the top bunks, or if they get hurt climbing in. (And I have some trouble getting in and out of the top bunks on Iowa, which are probably of similar size and height, and I’m small and pretty nimble.) Also, you can’t build seats like you used to. Fourth, it’s not worth it on most routes now. You’ll need at least an hour on each end to get things settled, which rules it out on transcons and transatlantics. Only on transpacifics does it even remotely make sense. Fifth, the passengers have changed. No longer can you assume that everyone will want to go to sleep. If I’m on a different clock than the guy next to me, maybe I want to work now and sleep when I get there. Working from my seat is easy. Working from a bed I have to get into is not. Also, the passengers are just generally less cooperative today. You might find a niche on charter flights or something, but it’s not worth it for the seat manufacturers or the airlines to develop it.
Evacuation might or might not be a deal-breaker. Most crashes happen very close to takeoff or landing, and so long as you had everyone in their seats then, it wouldn’t be a big deal (provided the bunks didn’t come loose and hit people). I can’t think of any cases where the plane had a mid-air problem that wouldn’t have given the flight attendants time to at least prep the passengers in the bunks, and probably get them out of them, before a landing requiring evacuation. (Obviously, if the plane just blows up, they’re dead anyway.) On the other hand, this kind of logic rarely works on the FAA.

The thing you want to look at is the standard European railway seating compartment. In daytime configuration, this is two rows of three seats facing each other. At night, the seats form the bottom bunks, the seatbacks fold out to form a middle row, and a top row folds out from near the ceiling, giving sleeping accommodation for the same 6 people. Normal etiquette is to sleep in your clothes rather than getting changed.

There still seem to be a couple of differences. Trains are generally more spacious, and I think the load factors are a lot lower. If you have an average of 3 people in those 6 seats, and the aisles are relatively big, then getting the stuff switched over isn’t that hard. If you have 5 people in those seats and small aisles, then you’re basically restricted to having maybe one out of 6 clusters switch at a time. If that takes 5 minutes, then it takes 30 minutes to change over everything in the plane.
And again, everyone on an intra-Europe train is in more or less the same time zone.

What if, rather than chairs, the resting arrangement is with the body fully extended, as though standing, but supported by leaning back at an angle? Perhaps 20 degrees, 45 degrees, or even 70 degrees (nearly horizontal)? It would make bathroom trips by outer (i.e. window) passengers pretty awkward.

The more extreme angles would seriously hinder evacuation, which means it’s a regulatory no-go. Standing chairs (not too dissimilar to the lower angles) have been proposed a couple of times, but never gotten anywhere.

Given your description, wouldn’t another way to describe it be that “aircraft safety standards are 1-2 orders of magnitude too high”? After all, we’re happy to accept lower safety standards for other similar long-distance transportation.

Assuming for a moment that the FAA decided that was reasonable to do, how much could the cost of an aircraft or its operation be reduced while still fitting inside that safety envelope?

I’m going to challenge this premise. The problem is that car crashes are only interesting when someone famous was in the car, while buses are inherently a bit safer relative to cars (more mass and better driving) and trains have pretty much the same conditions as buses, as well as being slightly more photogenic. But airplane crashes are very photogenic, which means that it’s in the best interests of the industry to keep them as rare as possible. A flaming airliner hurts all of us, no matter who built it or why it crashed, in a way that car accidents don’t hurt the motor industry.
At a wild guess, I’d say 10-25% on ticket costs. A lot of the drivers for safety would still exist. I’d guess that you’d see bus/train levels in a lower-regulation society, from the major operators. The big difference is with the minor players. When planes break, it’s a lot more spectacular than a bus or a train.

Oxygen masks would go first, and could go without reducing actual safety one bit. They’ve never saved anyone. Flotation devices would probably follow. The number of people they’ve saved is pretty small. The co-pilot is too valuable to get rid of, probably. The issue is that an airplane, unlike a bus or a train, can’t just stop, so you really want someone onboard who can land the plane if things go wrong. It’s possible that we could reduce the qualifications to be one somewhat, but I also think you seriously underrate how important the psychological portion of aviation safety is. We really should get rid of those masks, but we don’t because the public wouldn’t like it.

Oxygen masks would go first, and could go without reducing actual safety one bit. They’ve never saved anyone.

And have caused the loss of several aircraft. Unfortunately, we need the oxygen system for the flight crew, but extending it to the passenger cabin is an unnecessary risk. One that makes people who don’t know any better feel better, so we’ll keep doing it.

Blast it, John! You know I know that, and I was saving that for my next top-level post!
But yes, the oxygen system is a serious fire hazard, and needs to be watched very closely. Also, it’s never saved anyone in civilian aviation.

I don’t think that your first example counts, as the cause of the fire wasn’t an installed oxygen system, but rather oxygen generators for those systems that were improperly transported in the cargo hold. A similar incident could have happened if self-contained oxygen generators intended for miners had been transported on that plane.

@Aapje
No, I’m with him on this one. There are procedures to keep hazardous materials out of the cargo hold. No air shipping clerk would have accepted them from outside. But the maintenance contractor could because they didn’t have to go through normal shipping channels.

After all, we’re happy to accept lower safety standards for other similar long-distance transportation.

We accept lower standards for trains and buses because, at least in the United States, only poor people use those and that isn’t (yet) one of the places where we have taken critical notice of inequality. Would be interested in e.g. European commentary on perceptions of passenger rail safety there. We accept lower standards for automobiles because the lower standards apply only to average drivers whereas I myself (for ~3E8 values of “myself” not actually including John Schilling) am a superior driver who will steer clear of any crash. Airplanes, most people have to strap themselves into and trust their fate entirely to some random collection of strangers.

This empirically has a significant effect on perception of safety, which is more important than actual safety when it comes to selling tickets or buying public confidence.

Here’s what I’ve gathered about American rail safety regulations: The view of the Federal Railroad Administration is that trains are for carrying freight, and the handful that carry people instead are an afterthought. This leads to passenger trains being required to withstand collisions with freight trains, which forces them to be much heavier than their counterparts in countries with functional rail systems. So Amtrak can’t just buy the same model that the Shinkansen or the TGV uses and plop it down on the Acela, they need to have a custom design up to FRA standards, which is much more expensive to build and operate and it’s stuff like this that ensures that Amtrak will be an unprofitable ward of the state forever.

Well, I could grant that the crashworthiness requirements for the Acela might be excessive, because it runs on its own tracks. However, not having the rest of the passenger trains in the US designed for a collision with a freight train would be absolutely fucking bonkers.

The only train most intercity trains in the US would collide with is a freight train, because there are probably 30 freight trains passing a particular point for every passenger train. I’m sure there’s a time and stretch of track somewhere in the US where two successive trains both carry passengers and could therefore collide with each other, but I don’t know where that would be offhand.

Amtrak owns 730 miles of track for the 21,300 miles of route it serves. The rest is owned by freight railroads for their own operations, over which Amtrak has trackage rights.

This was my understanding as well, and it really seems foolish to me; surely making crashes less likely can be done in ways that are both cheaper and save more lives than trying to make crashes more survivable.

This was my understanding as well, and it really seems foolish to me; surely making crashes less likely can be done in ways that are both cheaper and save more lives than trying to make crashes more survivable.

You, a senior bureaucrat at the FRA, decide to implement this. You have an exquisite plan to make sure that the total death toll is lower, even though the trains are less crashworthy. And two years later, there’s a crash. At a congressional hearing, grieving families of the victims testify that if the train had been built to the old standards, they’re sure their loved ones would have survived. (This is probably total nonsense, but you can’t say that too loudly.) You’re quickly cast as the heartless bureaucrat who killed those people. Never mind all the people you saved by improved signalling or whatever. You can’t produce them because you don’t know who they are. You’re forced to retire in disgrace.
The problem with doing this kind of safety work is that you have to keep one eye on how it’s going to look to the public. You can’t make decisions that look too much like trading against safety, or you open yourself up to an absolute pummeling in the court of public opinion.

Comparing air travel to car travel is a bit like comparing roulette to blackjack. With car travel — as in blackjack — skill and attitude make a more significant difference in your odds of a good outcome.

What are the odds of dying on the road if you are a skilled and careful driver? Perhaps it’s comparable to those of flying.

I don’t doubt you can improve your odds by being an unusually skillful and careful driver. But commercial air travel is 100 times safer than car travel mile-for-mile. That’s a long way to go, even with elaborate care.

Is it possible? And if so, what would it take? These are interesting research questions.

I don’t have a study to back it up, but in my general observations, serious car crashes are almost always the result of multiple things going wrong at the same time. It’s quite plausible to me that something on the order of 1 in 100 fatal accidents kills a driver who was doing literally everything right and did absolutely nothing to contribute to his death. Probably just wearing your seatbelt religiously cuts your chances of death in half; and I wouldn’t be surprised if driving a late-model mid-size car with all the new-fangled safety features cuts your chances in half again.

Edit: I was just Googling some safety statistics and it seems that commercial air travel on American carriers has gotten shockingly safe over the last 5 or 10 years. I suppose that all the fanatical attention to safety has been paying off. So I would guess that even an ace driver who was very careful would have a hard time coming close to the safety level of an inter-city airplane trip.

I don’t have a study to back it up, but in my general observations, serious car crashes are almost always the result of multiple things going wrong at the same time.

The same is true of airplane crashes, and I do have the studies to back that up. That’s my next post on safety, actually. (Not quite ready for tomorrow, but maybe next week.)

Edit: I was just Googling some safety statistics and it seems that commercial air travel on American carriers has gotten shockingly safe over the last 5 or 10 years. I suppose that all the fanatical attention to safety has been paying off. So I would guess that even an ace driver who was very careful would have a hard time coming close to the safety level of an inter-city airplane trip.

It’s been a bit longer than that. The last crash where everyone died on a mainline airliner of a US carrier was November 2001. There’s probably a couple of factors involved. One is that the fleet composition started to seriously change at that point. Another is that I do think the safety effort on the part of the manufacturers was seriously stepped up, at least based on what I saw. (Revising the 20-year-old problems was always a major pain, because of how much more rigorous we have to be now in our writing standards.) But there are a lot of moving parts in any air crash, and we’re certainly not perfect. To give one example, we got spectacularly lucky with the ‘Miracle on the Hudson’. The crew did a fantastic job, but the stars also aligned to make it possible for them to ditch instead of plowing into Manhattan. We could have a US-registered 777 go down tomorrow due to a bunch of small, stupid errors, and it wouldn’t really change the overall safety picture.

Quite possibly, it’s true of most catastrophic failures which involve technological artifacts. But from the perspective of an air traveler, it’s not very relevant to this discussion since he has little to no control over any of those factors. The best he can realistically do is choose an airline with a good safety record. (Query whether other factors, such as good weather, time of day, etc. make a difference?) For a driver, it’s a different story because he has a good deal of control over his personal link in the chain.

The best he can realistically do is choose an airline with a good safety record.

There’s nothing to choose from in terms of safety between the vast majority of western airlines. Qantas famously hasn’t had a jet fatality, but looking at their record, they appear to have just gotten very lucky. Several incidents they’ve had could have easily caused fatalities. The only airline I might avoid on safety grounds would be Allegiant, and they’re still safer than driving.

Query whether other factors, such as good weather, time of day, etc. make a difference?

Weather obviously does, but that’s not something you can directly control. Areas with lots of bad weather should be avoided on booking due to risk of operational disruption, not safety hazards. I don’t expect time of day will do too much these days. Crew rest rules are strict, and instruments are good. Daytime is probably a little safer, but not enough so to be worth worrying about. I’d recommend the airplanes I worked on over the competition (there is a different design philosophy with respect to cockpit automation), but I have absolutely no problem flying their airplanes. You can probably guess which one based on where I’m from, but I’m not going to say explicitly. I know John Schilling has some thoughts on the subject.

In terms of pilot quality, the US carriers have a slight edge over the rest of the Anglosphere, which has a modest edge over the rest of Western Europe, which in turn are significantly better than the rest of the world(*), due to the relative size of the military and general aviation community. In the US, at least for the moment, the major carriers can have their pick of pilots who have spent a couple thousand hours flying real airplanes in environments where dealing with anomalies is a regular occurrence. The farther you get from the home of the USAF, USN, Cessna, and Beechcraft, the more likely you are to find someone whose flight experience consists almost entirely of watching over an autopilot, and there’s only so much a flight simulator can do to make up for that.

If you need a Chesley Sullenberger or an Al Haynes, instead of these guys, the US is where you will most likely find them.

But I agree with bean that pretty much all the first-world carriers are equally good at implementing and executing rigorous procedures for making sure you rarely have to depend on pilot experience. I don’t balk at flying Asian carriers for convenience, and will often chose European ones for the expectation of better service.

If you are going to fly (Anglo) American in the name of pilot quality, bean’s loyalty to his employer is not misplaced. Airbus, simplistically speaking, does automation better than Boeing, but Boeing is better at cockpit human engineering. If you care about the pilot, give him a plane designed for pilots.

And if you step outside the world of scheduled airline service, pilot quality becomes a much bigger deal because A: there’s more variability and B: the procedures can’t be as rigorously defined. The thing you want to look for in a pilot, beyond raw experience, is a willingness to cancel or abort a flight if things aren’t right. Obviously that can be tricky to determine on the day things are going right and you do get to fly, but about half of fatal general aviation accidents involve continuing a flight after obviously adverse conditions (weather, fuel) had developed. Also, stay away from any pilot who engages in unnecessary low-altitude maneuvering.

* Excluding wealthy city-states like Singapore and the UAE which can just hire veteran US/UK pilots as needed.

What are the odds of dying on the road if you are a skilled and careful driver? Perhaps it’s comparable to those of flying.

As Johan points out, two orders of magnitude is a lot. To get to that level, I guess you’d have to only drive less than 30 miles an hour over short ranges on weekdays when the weather is good and it’s not rush hour, and be skilled and careful. As a replacement for air travel, I’d tend to guess you’re practically going to be looking at 10-30x as dangerous no matter what you do. Buses have decent drivers and a lot of mass, and operate mostly on the highways, and they’re still an order of magnitude more dangerous than airplanes.
I think the relevant point is John Schilling’s. People like being in control, and everyone thinks they’re an above-average driver. But the perception of safety isn’t the same as actual safety, to the point where the majority of American casualties of 9/11 died as a result of choosing to drive instead of fly.
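A minimal back-of-envelope sketch of that gap, using assumed round rates (deaths per billion passenger-miles) in the rough ballpark of published US figures; the exact numbers vary considerably by year and source:

```python
# Back-of-envelope comparison of travel fatality rates.
# All rates are assumed round figures (deaths per billion
# passenger-miles); real published values vary by year and source.

rates = {
    "car": 7.0,       # assumed; a skilled, careful driver may do somewhat better
    "bus": 0.5,       # assumed; roughly an order of magnitude above airlines
    "airline": 0.07,  # assumed; scheduled commercial carriers
}

for mode in ("car", "bus"):
    ratio = rates[mode] / rates["airline"]
    print(f"{mode} is ~{ratio:.0f}x riskier per passenger-mile than flying")
```

With these assumed figures the car comes out around 100x (two orders of magnitude), so even a driver several times safer than average is still looking at the 10-30x gap guessed at above.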

One of the things that make it hard for even a careful and well-trained driver to reach airline levels of safety is the drivers around him. A commercial airline pilot operates in an environment where virtually every other operator is a highly trained professional. Even the amateur yahoos that are found around smaller airports have Private Pilot licences that are significantly more difficult to get than driver’s licences for cars. The superior driver, in contrast, has to deal with all the average-to-poor drivers around him. It just isn’t a fair fight.

The superior driver, in contrast, has to deal with all the average-to-poor drivers around him. It just isn’t a fair fight.

Agreed, but on the other hand if you are in a plane which suffers a sudden and catastrophic mechanical failure, you are in deep trouble. If you are driving a car and the engine or transmission suddenly fail, you can normally just coast to a stop.

Also, if you are driving a car, you don’t have to worry about a stranger smuggling explosives into your car so that he can blow it up.

Granted, when you fly, these risks are pretty small, but I think they’re part of the reason why it’s easy to intuitively overestimate the risks of flying. In regular life, machines fail on a regular basis. Also, when you go through airport security, the screeners (at least in the USA) seem to be stupid and incompetent and more interested in humiliating someone’s 60-year-old grandmother than in identifying actual threats.

My Favorite Sister [her choice of title], who is the airport geek in the family, has agreed to write one or more columns on the subject. Not sure when I’ll get them, but another thing for you guys to look forward to.

A professor of political science named Bruce Gilley published an article called “The case for colonialism” in Third World Quarterly, a respected academic journal. This happened 2 weeks ago. The scandal has been brewing since:

– Full text of the article (alternative link). Chief claims: colonialism has usually been a positive influence compared to likely alternatives; anticolonial movements and sentiments in Third World countries have caused and continue to cause huge setbacks in the development and well-being of their citizens; we should encourage Western re-colonization of parts of such countries on a voluntary basis, with the cooperation of their governments. An example of such a possible arrangement is sketched out for Guinea-Bissau.
– This is a “Viewpoint” article; I think it means that it’s meant to be closer to a polemical essay than a careful scholarly article; despite that, it’s still supposed to undergo double-blind peer review.
– There’s been a change.org petition to retract the article and apologize, ~7k signatures.
– Some editors of the journal threatened to resign if the article is not retracted.
– The editor-in-chief published a response to these threats and retraction demands claiming the article underwent proper peer review and that provocative Viewpoint articles are explicitly part of the tradition at this journal.
– 15 members of the editorial board have resigned today, claiming among other things to have some evidence that the peer review process has not been properly followed.
– Nathan Robinson writes at length on why Gilley’s paper is morally odious and in fact tantamount to Holocaust denial, but is wary of attempts to have it retracted and does not support them.

“All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans (sorry: what has colonialism) ever done for us?”

Enough to understand that public service provision wasn’t the notable feature of actually existing colonialism?

Edited to add a bit more: in news that shouldn’t shock any observer of institutions, colonial governments were concerned with the people and powers they were accountable to, and with the welfare of subject populations only secondarily or to the extent that it decreased unrest at affordable prices.

Colonialism wasn’t “some foreigners come run a country as a hobby project”, it was “a metropole with an agenda of domestic economic development figures out how it can bend a foreign territory to that end.” Cash crop quotas, trade and production restrictions, corvee labor and other near neighbors of outright slavery on occasion; the governing goal was to get rich and trade on favorable terms.

Rome is actually an interesting contrast, because Rome was interested in incorporating territory into the parent state in a way that colonial governments generally weren’t (France is the border case here, but in practice functioned like a fairly typical colonial power in most places, and while we could talk about Algeria at length I’d argue it doesn’t really align with the Roman model).

Again, this shouldn’t come as news to any structuralist. And that’s before we get into the pathologies of how people act when they wield power without local accountability over people they’ve been told are their natural inferiors, etc.

To put it in a non-condescending way: we all like colonialism here because it’s contrarian and anti-nationalist, and we also hate mercantilism because it’s stupid. But the notable feature of colonialism was mercantilism. Even Great Britain (a fairly enlightened state) made it illegal to pick up salt off the ground in India, so as to prevent competition with metropolitan manufacturers.

And this isn’t an accident: of course a government with control over people A and B, but which is only accountable to A, is going to interfere in markets to support A.

(That said, I’m not sure why your venerable joke in particular was the prompt for all this.)

yeah, I snapped at this because it looked like a sort of lazy contrarianism that ticks me off around here, but I may have misunderstood the thrust of your comment (I caught the Life of Brian reference, but my point was that colonialism tended not to be particularly concerned with providing “a fresh water system, public health” and the like.)

@.
Yeah, exactly. I don’t disagree with any of that, or with Rob’s edit. IMO the quote is funny in part because despite all those things they’re still right to rebel, but that may be an uncommon interpretation.

Natives absolutely are right to want to cast off exploitative mercantilism/imperialism. They’d just be fucking idiots to also throw away all the nice institutions and infrastructure.

This is interesting. Why didn’t later governments try the Roman model?

These are guesses:

Unlike Rome they had treaties with many of their powerful neighbors, and unlike Rome they couldn’t easily annex land close to them – their colonial empire wasn’t visiting from next door. Also unlike Rome they didn’t have a recent history (or founding myth) of being a colony themselves.

From wikipedia’s colonization article:

In North Africa and West Asia, the Romans often conquered what they regarded as ‘civilized’ peoples. As they moved north into Europe, they mostly encountered rural peoples/tribes with very little in the way of cities. In these areas, waves of Roman colonization often followed the conquest of the areas.

@anonymousskimmer, Nancy I don’t know the answer to this, but I’d be interested to know how much of the answer in Rome’s case was “military necessity”. By the time of the Social War Rome needed at least some of its Italian client states to support it in order to hold everything it had, and granting political rights to the rest of Italy was a key strategy for ending that war.

That seems like it could have created a template for Rome’s future expansion, but I don’t know enough about how the provinces were incorporated to say whether that’s the case. It does seem relevant, though, that imperial Rome was drawing heavily on its provinces for military manpower.

Enough to understand that public service provision wasn’t the notable feature of actually existing colonialism?

My study of history has led to me taking note of several features of actual colonialism, and Roman imperialism. If you think “notable feature of colonialism” is singular, maybe it’s your own education that needs work.

The intent of colonialism was in most cases to make the colonists and their imperial masters rich, while providing the colonized with the benefits of civilization and Christianity and depriving them only of savagery and gold. Colonialism as actually practiced didn’t exactly live up to that standard, in part by seeing rather more savagery and less civilization than was actually present in the areas to be colonized, but if you only consider people’s intentions to be “notable”, then the colonialists come out looking like arrogant, selfish good guys.

And if you want to count the actual effects, regardless of intent, then things like the enduring civic institutions that promote good government long after the colonists are gone, do seem kind of notable to me and ought to be included on the ledger along with all the costs and harms.

If instead you want to count all of the harm but only the intentional good, then Python’s snark was aimed squarely over your head and for our amusement.

@John Schilling: I think it is reasonable to say that the notable feature of colonialism as opposed to globalization-as-usual is mercantilism[1]. (Mercantilism has advantages: it makes development overseas less threatening to elites at home, which gives them incentives to promote development up to a point. So I’ll admit that me calling mercantilism “dumb” was unfair.)

Many positive effects of colonialism come from international trade, free-er movement of people, and export of good institutions. But we know how to do a lot of that stuff without empires.

Exporting institutions seems the hard one, and here colonialism might be effective. But there are less dangerous ways to export institutions, such as writing books about them or (speculatively) ideas like the ZEDE scheme in Honduras.

I also have no idea how to measure the success of colonization here. Vietnam in 1945 was better than Vietnam in 1880, but that’s not the relevant comparison. Russia in 1988 had better institutions than Russia in 1922; what do we conclude about communism from this? The closest thing to a natural experiment I know is Thailand, which did just fine without being colonized.

[1] By this I mean “regulation of trade to support and protect high value-added industries at home”

But are we motivated to do that stuff without empires? Seems like post-colonial Africa is showing that the developed nations have only a minor charitable interest in developing those areas. Unless, of course, there’s some natural resource for their international conglomerates to exploit. And I’ve often heard that type of situation referred to as ‘neo-colonialism’.

So was it a mistake for the US to have a revolution, and should they instead have remained as part of the British Empire? After all, all the civilisation the fledgling American state had came from the Mother Country.

I missed the part where the British Empire ever gave us any good wine. And y’all know what we did with the tea you sent in its place. The rest, what the British had built we mostly kept for ourselves after the revolution. “Rebellious” only sometimes means “Stupid”.

Adam Smith argued that the best solution to the problem was for Britain to let the colonies go but that since they weren’t willing to do that, the next best was to give them seats in parliament proportional to their contribution to the tax revenue of the empire. He then casually added that if they did that, in a century or so the capital would move to the New World.

My memory is that his timing was about right, that by about a hundred years later the GNP of the U.S. passed that of Britain.

Neat! Did Smith foresee the colonies expanding to the Pacific, or was he predicting the cis-Appalachian New World would eclipse Britain by itself? If I understand it correctly, the British were not too keen on letting the colonies go west.

My understanding is that George III was happy to let the settlers expand, but that having paid for the Seven Years’ War, felt that he owned the new land and wanted to be paid for it. Note that George II commissioned the Ohio Company to expand, provoking (the American theater of) the War. The Company failed its end of the bargain, but the colonists recognized its claim to the land, roughly speaking.

I thought Smith had the “give the American colonies seats in Parliament” option as his first choice, due to the level of British public debt and the extra tax income he thought that option would bring. But maybe I misremember, or maybe he had different views at different times. In any event, I’m inclined to think that would have been the best option myself. Nor am I certain the revolution was right regardless; who knows what would have happened otherwise, but the thirteen colonies eventually ending up as part of a Greater Canada (at least one plausible outcome had the revolution failed or never happened) doesn’t sound too terrible to me.

Oh, and on Gobbobobble’s question, it occurs that since the American colonists were keen on expansion, if they’d had representation in parliament, policy would inevitably have been pushed in a more expansionist direction.

Actually, let’s make this a tradition. Keep a petition to shut down the petition site on the front page of all the major ones. I sincerely think this would demonstrate the spirit (and the limits) of democratically directed change better than just about anything else they could do.

Did you read the article, or are you just making this argument on principle?

The Nathan Robinson piece that Anatoly linked makes a strong case for the claim that this is a shoddy paper that doesn’t actually grapple with any of the hard questions, and is instead deliberately crafted to provoke outrage so that Gilley can portray himself as a martyr for free speech:

I go into this level of detail because I think it’s crucial to show that Gilley’s article is not a serious work of scholarship. […] I expect Gilley wants the following to happen: people will be outraged. They will call for the article to be retracted. Then, Gilley will complain of censorship, and argue that lefties don’t care about the facts, and that his point has been proved by the fact that they’d rather try to have his article purged than have to refute its claims.

Did you read the article, or are you just making this argument on principle?

Principle. I tend to think colonialism wasn’t as bad as it’s commonly made out to be, and also dislike calls for rejection of papers on ideological grounds.
And even if Gilley is in fact playing the free speech martyr angle with a bad paper, that doesn’t mean that he’s not correct about how the left is going to respond.

I expect many of the people who signed the petition did not read the article. But none of those people (to the best of my knowledge) are SSC regulars, from whom I might have hoped for better.

I think Nathan Robinson’s piece (which does not support a demand for retraction) has the correct response.

I think bean had the better approach. I have now skimmed both the article in question and Robinson’s response. My impression is that the article in question did indeed focus only on the good parts of colonialism, and missed some really bad parts of it. But Robinson’s response only talked about the bad parts, so it wasn’t really any less biased. Heck, both pieces were short essays. They were really polemics, not scholarly attempts to cover the field.

The difference between the two pieces was the tone. Gilley’s piece was contrarian — he wanted us to consider that we got colonialism wrong. And his suggestions were all about voluntary moves by countries to go back to colonialism because they haven’t been successful on their own. I find it very dubious that this would work very well, but it is certainly worth the discussion.

Whereas Robinson’s piece was condemnatory. He called Gilley’s piece odious, and compared it to Holocaust revisionism. Robinson wants to shut down discussion and have everyone accept the usual belief that colonialism was a complete disaster for those colonized.

To me it’s a pretty easy decision which of these essays to condemn. I admit though that I went through them both pretty quickly, and might have missed some key points. So please let me know if I have mis-characterized one or both of these essays.

But Robinson’s response only talked about the bad parts, so it wasn’t really any less biased.

No, Robinson covers this:

We should observe here that this is a terrible way of evaluating colonialism. It is favored by colonialism’s apologists because it means that truly unspeakable harms can simply be “outweighed” and thereby trivialized. We can see quickly how ludicrous this is: “Yes, we may have indiscriminately massacred 500 children, but we also opened a clinic that vaccinated enough children to save 501 lives, therefore ‘the case for colonialism is strong.’” We don’t allow murderers to produce defenses like this, for good reason: you can’t get away with saying “Yes, I killed my wife, but I’m also a fireman.”
[…]
By the way, I think even committed opponents of colonialism may sometimes fall into this trap. They may feel as if it is necessary to deny that colonialism ever brought any benefits—which, as Gilley points out, even Chinua Achebe doesn’t think. Instead, it’s important to point out that building power lines and opening a school doesn’t provide one with a license to rob and murder people.

(Aside: defending Gilley’s piece as a “polemic, not a scholarly attempt to cover the field” is a weird angle when the question at hand is whether it deserved to be published in a scholarly journal. Pointing out that it’s an unscholarly polemic is precisely the point of the Change.org petition. I think it’s a bad strategy, but you seem to be conceding that their underlying analysis is correct.)

This is not the article you write if you are trying to change the minds of people who think colonialism was bad. If you are trying to convince people that there were no downsides to colonialism, you are an ahistorical liar, and the comparison to Holocaust revisionism is appropriate. (For example, estimates of the population decline in the Congo under Leopold II range from 1M-15M between 1885 and 1908.) If you are trying to convince people that the negative possible outcomes of colonialism can be avoided, you need to acknowledge and grapple with those outcomes, and explain why things will be different this time. Gilley does not do that.

This is furthermore not the article you write if you are trying to convince people to support (say) charter cities. Indeed, if I were a proponent of charter cities, I would be furious at this loudmouth coming in and trying to label charter cities as a bold new form of colonialism. None of the people who need to be convinced to make charter cities a reality are going to be swayed by the claim that hey, guys, this is just like colonialism, which I’m sure you all agree was great!

This might be the article that you write if you are trying to borrow the non-reprehensible nature of charter cities and other modern proposals, and whitewash colonialism by pretending that this is what it was about all along. It also might be the article you write if you are trying to troll people and get on TV as a martyr for free speech. In either case, I don’t see any value in defending it.

“Yes, we may have indiscriminately massacred 500 children, but we also opened a clinic that vaccinated enough children to save 501 lives, therefore ‘the case for colonialism is strong.’” We don’t allow murderers to produce defenses like this, for good reason: you can’t get away with saying “Yes, I killed my wife, but I’m also a fireman.”

Pointing out that it’s an unscholarly polemic is precisely the point of the Change.org petition.

That’s a pretty dumb point, in my opinion. The reason I know it is more of a polemic than a full scholarly treatment is that it is an article and not a large book. That is, no one can cover this topic in the length of an article. The point of an article in magazines of this sort on a large topic such as this is to entice the reader to rethink a subject. I can’t imagine anybody being convinced to change their minds about anything with an 11-page article.

I don’t know this particular magazine, but I used to read a similar periodical called Foreign Affairs. FA had a lot of very dumb articles, mostly written by very prestigious authors. I doubt that the petition happened because the article was so much worse than others. Instead it was because he broached a verboten topic.

The obsession with scorekeeping in history and related fields is pretty obnoxious. Questions like “was colonialism good or bad on net” are pretty close to meaningless and unknowable and are clearly different from modern policy questions.

Is this true, or is it just something that everyone knows? I don’t remember my history textbooks that well, but they mentioned some upsides of the colonization of the United States.

You are probably right about the way the Philippines is covered in US history textbooks, but I’m not sure what authors are supposed to do there. Contemporary arguments for maintaining control of the Philippines were, I think, so inflected with jingoism that it takes a lot more effort to read them sympathetically than you can expect from high-schoolers.

Maybe textbooks are excessively pro-Wilson, which means being excessively anti-colonial? They do tend to give presidents the kid-glove treatment.

(Specifically, the way they criticized him and discussed his failures felt personal in a way that stood out to me from all other presidents. I suspect one of the authors had very strong opinions and let them bleed into the book. Interestingly I don’t remember getting that sense of the authors’ opinions on Reagan one way or another, though, which I would have expected to follow … So idk)

I have very little to do, culturally or economically, with the members of my city and county councils, even less to do with their voters. And even less to do with the people in my state capitol, and even less again with the people in DC.

I’m pretty far from convinced that this is a productive way to look at colonialism, but it definitely isn’t gonna be very useful to point out that if you turn your head and squint, the fuzzy outlines of colonial relationships kinda look like the fuzzy outlines of your relationship with your city council. You can do the same kind of rhetorical trick with any power relationship and it never goes anywhere.

I disagree, or rather, I think it’s gotten even more complicated than it was.

One of the sociocultural problems rearing its head is the mirror reflection of what I described: the people in the wealthy cosmopolitan areas are complaining that the deplorables in Kansas and the knobs in Northern England have gotten too different from the people in Boston, New York, DC, London, and yet somehow their votes still count. The mobile cosmo people in the cosmo areas culturally have more in common with each other than they do with the people in their respective hinterlands.

I will grant you that “was colonialism good or bad on net” is almost meaningless and unknowable. But “would Africa be better served if Western governments took a colonial approach to improving Africa?” is very much a modern policy question. Gilley is trying to make a case that the answer to that question is yes. Did he overlook the really bad parts of colonialism to get that answer? Yes. Is that oversight acceptable? I would say yes, because he is arguing for a better form of colonialism in which all Africans are considered persons, which was not the case 100+ years ago.

Why limit it to a western government? There is a non-western government with one of the largest economies in the world and interests in Africa.

I’d wager there are a number of non-western nations without colonial guilt who’d like to see what they can wrest from the grasp of African nations; I just want to point out that I can’t imagine the US or Europe joining them anytime soon, when our major zeitgeist is penitent anti-racism and the counter to that is isolationism.

I don’t think even Trump is likely to say “We’re not getting a good deal in Africa, so let’s go build some roads and take over diamond mines.”

I could maybe see some Western NGOs morphing into quasi-colonialist endeavors, but only in the hypothetical.

Skimmed the original article and the response by Nathan Robinson.
The tl;dr I got for the article was that the rate of progress under colonial rule was better than the rate of progress under anti-colonial rule. Furthermore, he argues, it would be best for Africa if developed countries took a colonial approach again (he gives three methods for how this could work).

The tl;dr I got from the response was that the acts committed by the colonizing countries were so horrible that colonialism cannot be supported by a cost-benefit analysis. Robinson asserts Gilley had to shorten history to make the comparison favorable at all. Robinson also says the cost-benefit framing is flawed, likening it to saying that an abused wife should stay with her husband so that their kids can go to a nice school. Furthermore, he predicts that Gilley purposefully made an outrageous claim and is going to try to claim the moral high ground when he gets censored.

I don’t know enough about African history to be sure, but I do think that part of the problems in Africa is due to poor self-government, in part introduced after revolts against colonial rule. I think Gilley is proposing a very interesting solution to help improve Africa as a whole. I think his solution could work. It is interesting that Robinson is essentially saying that colonialism was so bad before that we cannot do it again.

I dislike that the reaction looks like “Gilley is wrong and immoral in his statements, so retract the paper.” I would much rather see a criticism of how Gilley is wrong (Robinson kind of hand-waved that). Also, Robinson doesn’t address the counter-argument that, if done correctly, colonialism could do all of the good that Gilley argued for without all of the really bad stuff. I think that would be the most charitable way to interpret what Gilley is arguing for. Sadly, I don’t think I will get to see anyone argue against the charitable interpretation.

part of the problems in Africa is due to poor self-government, in part introduced after revolts against colonial rule.

There seems to be considerable evidence that mutual trust among the people and traditions of good government are extremely important to governments being able to function well. The colonial powers employed divide and rule tactics that produced mistrust between native groups, and aggressively destroyed any local traditions of effective self-governance. So there’s a case for colonial governments being responsible for a considerable part of how bad post-colonial governments tend to be, which suggests colonialism is not the solution (unless perhaps it is intended to be permanent). Of course, someone might say that colonialism without divide and rule and destruction of local institutions might be better, but there are reasons colonial governments used those tactics; how is it imagined that the hypothetical enlightened colonialists are going to maintain control?

The beginning of the slave trade was the selling of criminals and war captives; as slavery became profitable for coastal natives, it shifted from getting rid of existing classes of people, to expanding that class of people, so wars shifted from dispute resolution to slave-capturing expeditions.

African problems began long before that, though, with the collapse of the network of empires that had ruled it. I believe the commonly cited reason for the collapse is the depletion of readily available surface minerals, primarily gold, but it has been a while since I researched the matter, and my memory is hazy.

“No other path to democracy has had as high a success rate as having the British notice what a nice country you have, and take it over until they got bored with it.”
– J. Schilling, Slate Star Codex, 2015

That said, Robinson’s takedown seems on-target for this particular piece. You cannot argue for colonialism on utilitarian grounds without acknowledging the costs as well as the benefits, and if you are going to name-check the Force Publique as a benefit on the grounds of martial efficiency, you really kind of do have to take note of the task it was originally set to so efficiently pursue. Either Gilley is so tone-deaf as to make James Damore a model of tact and social awareness, or he’s deliberately trolling.

“Until they got bored of it” brings up a point. Is there a correlation between “how hard it was to get independence” and the shape a country is in? The worst ex-British colonies tend to be the ones where the British fought to stay; the best ex-British colonies tend to be the ones where the British recognized increasingly greater autonomy. There are outliers, of course – the US being the obvious one.

Correlation isn’t causation – in the ones that asked nicely for independence (Canada, let’s say), the indigenous population had largely been heavily reduced and ethnically cleansed, and replaced with primarily Europeans (who, in Canada, up until fairly recently were predominantly British in origin – Canadian national mythology has largely erased this). The British were more friendly to some British Canadian saying “hey, can we have a bit more independence” than to an African subject saying “look, can you fuck off; you’re only here by force of arms anyway.”

However, if there is a causative effect, perhaps it works like this: the first wave of attempts at independence might look like intellectual types saying “independence, please; here is my argument for independence” and the British saying “fuck no we have the Maxim gun and you notably have not.” Fighting a war for independence selects for post-independence leaders who were good at fighting a war, not at ruling a country post-independence – perhaps intellectuals asking for independence would have done a better job.

Possible data point: Kenya is doing pretty well, and Kenyatta (an intellectual, and probably innocent of charges of being involved in pro-independence violence) was imprisoned/exiled during the violent bits of independence. Then he came out and was put in charge, and seems to have done quite a good job.

The US was a British colony in the old sense of the term – it was a new nation created predominantly/fully by citizens of the parent state (the natives weren’t assimilated into the new nation).

These sorts of colonies have historically done very well when casting off the parent state, as they were originally established with a cut-and-paste government equivalent to the parent nation, and usually governed by internally appointed inhabitants of the new nation.

This hasn’t been done much since the Phoenician and Greek colonies a few thousand years ago.

the more I read Nathan Robinson, the more I look past the initial goodness to the underlying badness

it’s the problem of a lot of left-socialists really: if they acknowledge that certain arguments are good, they’re fucked. In this case specifically, he really has read the arguments and they are bad. But I wish he’d take on, say, the op-ed he talks about, instead of linking to an even worse site which just does a terrible job discussing it

In this case specifically, he really has read the arguments and they are bad. But I wish he’d take on, say, the op-ed he talks about, instead of linking to an even worse site which just does a terrible job discussing it

What a fascinating criterion to judge Robinson as “bad”. Also, I’m quite confused about which op-ed you are talking about. Most of that piece argues that Gilley’s original article just skips a very large part of the badness of colonialism, which probably should figure in any sensible cost-benefit analysis. Skimming the article again, I could not find the “op-ed” you might be talking about.

The same thing happened when conservative law professors recently published an op-ed blaming the “rap culture of inner-city blacks” for cultural decline, with one of them lauding the “superiority” of white European culture. People got upset, for obvious reasons, and students objected to having to be taught by a white supremacist. But when one of the professors went on FOX News, he declared that “there were no allegations that anything we said was incorrect.” (There were plenty of such allegations.)

This paragraph contains links to a pretty trash-tier punditry site which lays out the story thus far and calls the professors racist a bunch of times, which I don’t think is fair at all; the site attempts to defeat their arguments but mostly fails. A sample of such: they argue that, even though bourgeois norms worked great during the fifties, they didn’t work at all during the thirties. Apparently not being strong enough to defeat the Great Depression means that the benefits of the norms are worthless. (Though since the fifties benefited from the post-WWII boom, that might not be a particularly good time period to cite either.)

Anyways, Gilley’s original article is kind of shitty. But I think that’s just lucky for Robinson, because I could write a paper arguing, essentially, that the modern-day descendants of the colonized have benefited from colonization. This is quite possibly true and doesn’t justify colonialism or cancel out the immorality of the many atrocities under it, but a lot of lefties would get pissed and I don’t think Robinson would be able to do anything, beyond just cite a bunch of colonial atrocities and how bad they were, even though that doesn’t interact with the argument.

Maybe I’m wrong about that specifically, but the bottom line is that anyone who wants to remain in the good graces of the regressive left cannot agree with a deplorable, or something along those lines. That handicaps them tremendously; if someone like Nathan Robinson was allowed to give a little ground and still remain in good standing, he could do a lot of good work. But he kind of isn’t, and I hope you can see that.

I like to be aggravating on the internet, so my favorite question to anti-colonialists is: “If you apply your definitions to the American South, and think of the Redeemers as an anti-colonial movement, do you agree that ‘The War of Northern Aggression’ is a fair description?”

Anti-colonialists can painlessly bite the bullet on that. The Redeemers were not an anticolonial movement, because the re-unified US was not an empire, because it hastened to give the conquered full democratic rights.

Similarly, anti-colonial types aren’t any more opposed to the European Union or the ICC than a random sample afaik.

because the re-unified US was not an empire, because it hastened to give the conquered full democratic rights.

The Roman empire gave people in territories outside Rome the rights of Roman citizenship, although not to all of them, not instantly, and not democratically. Does that make it not an empire?

What does democracy have to do with whether something is an empire? Suppose you have an imperialist power whose ideology includes democracy. It conquers and rules a territory and gives the inhabitants the same voting rights as existing citizens.

Further suppose either that the population of the colony is much smaller than that of the power so its people are outvoted any time the inhabitants of the colony are reasonably unified or that the power has franchise restrictions, such as literacy, which most of the inhabitants of the colony don’t meet.

The Roman empire stripped citizens of the right to elect their head of state. That’s when it became an empire.

It could be argued that the US is also an empire with respect to its territories, but these territories nominally (and in practice) have the right to become independent, and the citizens of these territories can claim full citizenship by moving to a state.

Further suppose either that the population of the colony is much smaller than that of the power so its people are outvoted any time the inhabitants of the colony are reasonably unified or that the power has franchise restrictions, such as literacy, which most of the inhabitants of the colony don’t meet.

Does that make it not an empire and not a colony?

Yes. Though in the first case it would also make it unlike any state (other than a city-state) which has existed in history, as it doesn’t seem to devolve any democratic/republican powers down to the local level, so it probably needs a new name. In the second case it’s a democracy like an ancient Greek city-state (though not itself a city-state), so I’d call it a “5th-century-BC Athenian ‘democracy’”.

The Roman empire stripped citizens of the right to elect their head of state. That’s when it became an empire.

I don’t think democracy or the lack thereof is a defining feature of empire. “Empire” is rule over more than one ethnically, culturally, and/or linguistically distinct group. Whether that rule is democratic or non-democratic is orthogonal to the definition. If non-democracy alone made an empire, then almost all governments in history would be empires.

Unless you for some reason define empire as “ruling over diverse groups AND non-democratic.” But I’m not sure what the reasoning behind that would be.

Also, we can find reasons to disqualify any democracy if we really want: if we find a non-good democracy or a democratic empire, we just need to find the disenfranchised subgroup or unelected institution that isn’t matched in the US/EU, and poof – Fake Democracy, doesn’t count.

It is true that ’empire’ does not mean ‘anti-democratic state’. But most anti-colonialist goals would be satisfied by a multinational democracy[*]:

The ultimate goal is for the goals of the state to align with the goals of its subjects, or for the state to be unable to interfere with the goals of its subjects. This can be accomplished through (1) democracy, (2) a network of patron-client relationships, (3) fortuitous alignment of the interests of the ruling class with everyone else, (4) individual rights, (5) benign neglect or (6) other.

The problem with empire is that, due to physical and social distances involved, (2, 3) don’t work. If you can make (1) take up the slack, then empires stop being undesirable (it is not clear whether (1) ever can take up the slack). From this point of view, we can answer as follows:

The Roman empire gave people in territories outside Rome the rights of Roman citizenship, although not to all of them, not instantly, and not democratically. Does that make it not an empire?

It makes it much less an empire in the relevant sense. Citizenship rights means (4) and make (2, 3) more likely.

suppose … the population of the colony [is] outvoted any time the inhabitants of the colony are reasonably unified… Does that make it not an empire and not a colony?

Libertarians are consistently outvoted; I would not call them ‘colonized’ by statists. The reason this example feels more empire-like than the case of libertarians is that mechanisms (2, 3) might be expected to fail, since there is a geographic division.

[*] I acknowledge that this is more nuanced position than the one I first staked out.

The Roman Empire was not a colonial empire, and the current discussion is about colonialism. The Roman Republic actually acted more like a colonial empire, but policies shifted with the adoption of monarchy (producing results like the extension of citizenship outside Italy). Since the shift had to do with the differing interests of the (kind of) democratic leadership vs. the monarchy, one would expect there to be differences between colonial powers which were monarchies and those which were more democratic, but as far as I can tell other factors not present in the Roman case made a greater difference in European colonialism, and it was nearly all conducted more like the Republic than like the Empire.

1) Having some other sort of people come in and tell you how to run your country and your culture sucks. It is Righteous to resist this sort of thing.

2) Better roads, better markets, better schools, and so forth means fewer babies (and grannies, and young mothers, and midlife men) die of starvation and disease. This ain’t nuthin, and it’s particularly suspect if you’re rejecting the crap the Romans have done for the colony as a whole whilst being of the social class that will *still* be on top once the Americans/Conquistadors/Brits/Romans leave.

3) The Myth of The Noble Savage And The Evil Westerner really has a lot to answer for. The Brits (and Romans, and Americans) are far from true angels, but there are some really horrid cultural practices native to non-Western traditions. Suttee and the concept of Untouchables weren’t invented by pale guys from a damp moldy island and forced on the poor defenseless South Asians. Any weighing of the pros and cons of colonialism needs to bear this in mind.

That I list three different things does not mean they all have equal true weight. Me, I think that “y’all ain’t th’ boss a’ me” is a vital principle, and can stand on its own. But then I’m a Southerner, so I would, wouldn’t I?

Other people, of a less relativistic and more absolutist (and utilitarian) stance, could be expected to hold that increasing health and lifespan and moral living is of paramount importance.

True. And I have only moderate sympathy for Brahman class Indians who bitch about how horrible it was to live under the British imperial yoke. (Being serious here – I do have sympathy, but not an unlimited well.)

Still, there is a difference between the devil you know and the weird furin sort. Or so many sorts of people seem to think.

What makes me so angry about this and a few other academic “scandals” that have been brewing lately is the disingenuous appeal to academic rigour the critics always make: as soon as the paper comes out with a conclusion people don’t like “the methodology was garbage,” “the peer review process wasn’t properly followed,” etc. etc.

The thing is, you can find problems with any published paper if you’re of a mind to look for problems (this is a bigger problem with peer review, actually–whether you get a positive or negative review is too random, and clearly has a lot to do with whether the reviewer was already predisposed to your viewpoint, jealous that you came up with the idea first, too tired and busy and feeling resentful that he agreed to write the review, etc. etc., at least in the humanities). So on the off chance a paper arguing a controversial viewpoint makes it past two reviewers, either through sheer luck or being very careful, other academics who don’t like the conclusion will always be able to claim that the academic rigour was crap if they feel like doing so.

But of course, they would give a strong endorsement to an equally (or less) rigorous paper arguing for a conclusion they liked. The formula works: “conclusion I like? Relax demands for rigour.” “Conclusion I don’t like? Turn up nitpicking demands for rigour until nothing gets through.”

The Robinson article, while it can’t help indulging in some moral grandstanding, at least has the right idea by actually taking on the arguments of the paper. It’s the isolated demands for rigour resulting in demands for retractions and calls to ignore the arguments because proper academic standards weren’t followed that bother me.

“The peer review process wasn’t followed” sounds suspiciously to my ears like “we are going to boycott the election” – it’s a last gasp attempt to circumvent a loss everyone knows is coming.

This is not to say that there *aren’t* papers whose review process has been…ummm…greased – that Lancet paper claiming a million Iraqi dead from the US intervention comes immediately to mind. But imo it’s far more common that a borderline paper gets eventually passed than something outrageous gets pushed through.

The upside of this is that everyone nowadays seems to be willing to jump on a chance to replicate. This is less useful in an opinion piece like this one, but still. Baby steps.

There is the question of what it indicates, vis-à-vis revealed preferences and “voting with one’s feet,” about Western governments and institutions versus non-Western institutions, when one considers the magnitude and direction of immigration and population flows in our present world. Why should (potentially) millions of people with such a clear interest in living under systems ‘designed and run by white people’ have to subject themselves to the effort, cost, and risk of moving to those institutions, rather than those institutions coming to them?

I don’t think “run by white people” is a relevant variable. Nevertheless, nearly everybody believes that it would be better if the institutions that made the nations of the first world successful could be extended across the whole of the globe. It’s just that most people really doubt that it was ever the aim of colonialism to do that, rather than just to use the countries of the global south as a source of plunder and cheap labor.

Why should (potentially) millions of people with such a clear interest in living under systems ‘designed and run by white people’ have to subject themselves to the effort, cost, and risk of moving to those institutions, rather than those institutions coming to them?

Part of the problem is humiliation. Consider the situation in Zimbabwe: Is it easier to believe that white farmers are getting rich by selfishly exploiting the country’s natural and human resources or that whites make better managers and organizers?

But even ignoring that, there is a bigger problem which is that white people, for all their skill in creating nice countries, tend to be extremely status-oriented. For a while now, whites all over the world have been locked into a status competition to find out who is the most progressive, benevolent, and compassionate. Taking charge of third-world peoples’ affairs, although it would be extremely beneficial for your typical third-worlder, would require an admission by the proponents of such a policy that they are among the deplorables.

When we are talking about “colonialism,” should more attention be paid to Arab colonialism? Contrary to European colonialism, which is normally presented as a moral and practical disaster, I am constantly hearing how wonderfully the Arabs treated the Jews and other minority groups they subjugated, and that Arabs have some kind of moral right to continually occupy and dominate the lands that they invaded and colonized, even to the point of excluding those on the receiving end of their activities.

I don’t think that the people who make these claims are generally aware that Arabs used conquest as well. It seems that the narrative is: ‘those Arabs were always living there and they were so generous to let Jews and Christians live among them’.

Lol, it’s hardly a secret. But anyway, thinking about it a bit more, I don’t think you have to scratch very much at the surface to see that there’s an unspoken assumption that only white people can be colonialists and that only white people must atone for the sins of those who have been dead for generations.

Compared to the groups they subjugated in the past, Arabs are doing lousy and white people are doing well. The tendencies you describe originate from the state of the world as it is, not from abstract principles.

“I have asked the Third World Quarterly to withdraw my article “The Case for Colonialism.” I regret the pain and anger that it has caused for many people. I hope that this action will allow a more civil and caring discussion on this important issue to take place.”

unless Robinson is even worse than I thought, it was a pretty crappy argument

what is lost by its removal is principles and norms of various kinds, but I don’t think anything would’ve been lost if he hadn’t made the argument to begin with

and to Onyomi: apparently this journal has some type of explicitly leftist mission, so yeah, it’s probably not the best place to have arguments. The admission that this is obvious is kind of sad, but it’s not an indicator that most academic journals are screwed, just most of the left.

I listened to at least one post-1989 version of the song played on the radio one day. There might be more. I might try to dig around on YouTube or something, but I’m not sure what to search for, and “WDSTF versions” seems likely to turn up effluvia.

I was surprised that Cordelia Fine’s Testosterone Rex won the Royal Society’s prize for science book of the year. I haven’t read it, so maybe the rest of it is good, but from the science based reviews and working through the two of her math examples that PZ Myers published, I had the impression that the science (and the math) was questionable, in that most of the points she scored were against straw opponents.

Has anyone read it? I pulled it to check the context for one of the math examples, but that’s all I read of it. (The math example was not good, fwiw).

Myers praised the math, while Cochran mocked the same example. But I think picking that out was just as much a straw man as what she did. Schmitt said “as many as” and should be mocked for it. I think he went on to give a more nuanced argument and it’s unfair to him to ignore that, but it is right to condemn his opening. Mocking Fine for mocking Schmitt is wrong. And while she didn’t acknowledge what else Schmitt said, she went on from the tail claim to give central distributional statistics like

Indeed, a promiscuous man would need to have sex with more than 130 women just to have 90 percent odds of outdoing the one baby a monogamous man might expect to father in a year.

That isn’t the right statistic to look at, and it isn’t a complete argument, but it isn’t as stupid as looking at the tails, so it’s unfair to pretend that she ended there. (But Myers ended with the tails, so he deserves this mockery.)

Similarly, Cochran mocks her as if she never noticed that prostitution exists. But she acknowledges the controversial nature of her claim and cites sources.

The math on that example is even sillier than it sounds – I got the book from the library, and Fine calculates the number of partners needed to have a 90% chance of having 2 children per year, apparently based on:

1) The assumption that a monogamous man “might expect to father” one child in a year;

2) 2 is more than 1.

3) Therefore, to have a selective advantage over the monogamous man, the promiscuous man must . . . actually I can’t explain the 90% confidence factor at all. Steps 1 and 2 are wrong, but at least I understand them.

It’s just bad math – it’s either wrong or irrelevant. The expected number of children per year of the monogamous man is probably less than one; all the promiscuous man needs to have a selective advantage is an expected value higher than the monogamous man (technically an expected value of grandchildren would be better, but that’s not what Fine’s looking at); and you don’t need a 90% confidence interval of having more every single year.

All in all, that math example suggests to me that Fine either doesn’t understand or doesn’t grapple with selective advantage. I personally think biological behavioral differences look very much unproven, but you don’t need a 100 children per year reproductive rate or 90% chance of having 2 children every year to have a selective advantage.

I don’t think Schmitt should be mocked, FWIW – I interpret the “as many as” as being a deliberately extreme example – nothing in Schmitt’s argument relies on or requires any kind of super-promiscuity. But I take your point that the other stuff in Fine’s book might be very good.
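For what it’s worth, both halves of this dispute can be checked numerically. A minimal sketch, assuming a per-partner conception probability of about 3% over the relevant period (my own round figure for illustration – Fine’s actual inputs aren’t reproduced here):

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more conceptions
    from n partners, each with independent conception probability p."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

p = 0.03  # assumed per-partner conception probability (illustrative)

# Fine-style tail claim: partners needed for 90% odds of two or more
# children, i.e. of outdoing the monogamous man's one expected child.
n_tail = 1
while prob_at_least(2, n_tail, p) < 0.9:
    n_tail += 1

# Expected-value comparison: a selective advantage only requires
# E[children] = n * p > 1, which takes far fewer partners.
n_mean = next(n for n in range(1, 1000) if n * p > 1)

print(n_tail, n_mean)  # around 130 vs. 34 with p = 0.03
```

With that assumed p the tail criterion lands near Fine’s 130 figure, while the expected-value criterion needs only about a quarter as many partners – which is the point the selective-advantage objection above is making.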

I had sort of categorized Testosterone Rex as “most likely a rhetorical argument with facially weak scientific analysis” and imagined that serious science bodies would ignore it as not making a credible scientific case. Assuming this isn’t a fluke, I’m wrong about at least one of those assumptions.

You’d either have a dedicated Marine detachment, or pick a junior officer to lead the detachment. How junior depends on the mission. There are some cases where the captain might lead it himself, but the XO is almost certain to be left behind in those cases. You certainly want the ship under control of someone responsible. An away party of the Captain, the XO, and a couple of other officers is only likely to happen when you’ve been invited to a party in port.

Probably not. Most of both men’s work is composed of the administrative tasks of running a ship, and that kind of thing is more easily done if everyone runs the same schedule. This applies all the way down the chain. The OOD for the night watch is reasonably trusted, but it’s easier to leave orders to wake the captain if anything goes wrong than it is to have someone senior up all the time. The captain has a cabin right by the bridge explicitly for this purpose.

> The captain has a cabin right by the bridge explicitly for this purpose.

I toured a pair of ships last weekend and that’s something I was wondering about. The captain of each, and the admiral on one, had relatively spare sea cabins by the bridge and in-port cabins in officer country. The sea cabins were much more sparsely furnished than the in-port cabins.

How much time was spent in port? Presumably not including the home port, where there’d be a house of some sort. Aren’t the in-port cabins kind of a waste of space? Why not just make the sea cabins nicer?

I toured a pair of ships last weekend and that’s something I was wondering about. The captain of each, and the admiral on one, had relatively spare sea cabins by the bridge and in-port cabins in officer country. The sea cabins were much more sparsely furnished than the in-port cabins.

What ships? And how were they? I’m going to have to start traveling for my ship fixes, so I’d like any reviews you have.

How much time was spent in port? Presumably not including the home port, where there’d be a house of some sort. Aren’t the in-port cabins kind of a waste of space? Why not just make the sea cabins nicer?

Not exactly. The in-port cabin on Iowa has a small bedroom and a big dining/living room. It’s there for things like hosting dinner parties and planning meetings. You couldn’t fit something that big near the bridge. Too many other things need to be near the bridge. Again, using my best reference, the sea cabin is tiny. Also, they make good cabins for VIPs. Roosevelt stayed in Iowa’s in-port cabin when we took him across the Atlantic to Tehran.

They were USS The Sullivans and USS Little Rock. I didn’t have time to see the submarine they also had.

I enjoyed walking through them, but it was self guided and the signs were a little skimpy. They also filled parts of the ships with what seemed like a fairly random assortment of exhibits not having to do with the ships themselves.

They were USS The Sullivans and USS Little Rock. I didn’t have time to see the submarine they also had.

Cool.
Actually, the Admiral’s quarters on Little Rock are the perfect example of why they have both cabins.
In 1963, she was flagship of the 6th (Mediterranean) Fleet. Things are fairly calm. There’s no reason the Admiral needs to be on duty 24/7, so he goes to his in-port cabin to sleep. When he wakes up, he goes to the flag plot to find out what happened overnight (which is nothing major.)
Four years later, she’s still the 6th Fleet Flagship. However, the Israelis and the Arabs are shooting at each other, and the USS Liberty was just attacked. Now, the Admiral needs to stay close to the flag plot, so he’s in the sea cabin, assuming he’s not sleeping in his bridge chair. Stuff is coming in every hour or two, and needs quick decisions, so he has to stay close.
I’ll also note that Fitzgerald’s captain was in his in-port cabin during the collision. They do use it at sea, and probably mostly use the sea cabin when they need to be close to the bridge.

I enjoyed walking through them, but it was self guided and the signs were a little skimpy.

Hmm. I probably wouldn’t mind too much, although I don’t understand bad museum setup in ships that have been around for a long time. Alabama had serious problems in this area.

They also filled parts of the ships with what seemed like a fairly random assortment of exhibits not having to do with the ships themselves.

That’s reasonably common. Ships are big, and a lot of the spaces are kind of duplicates.

I don’t think there’s a close conventional military analogy to Star Trek away missions, because the Enterprise is not usually engaged in military operations on them. (For the cases where it is, Tom Lehrer has the solution.)

The closest real-world analogy to the type of role the Enterprise is serving in over most of TOS and TNG might be something like the famous 1831–36 voyage of the Beagle in which Charles Darwin participated. This had a nominal goal of charting certain features and clarifying problems with existing charts, and in similar voyages scientists had served in official roles (e.g. as hydrographer or ship’s surgeon) and conducted their own research on the side. Darwin was attached less formally, essentially as a companion to the ship’s captain, Robert FitzRoy, who had scientific interests of his own. The Beagle’s officers did the survey work, in or near the ship, while Darwin did his zoological and geological research alone or with enlisted members of the ship’s crew.

I admit to having almost never watched Star Trek, but I don’t think that Beagle is quite the right analogy. That kind of surveying is done in areas where you have at least a vague idea of what is waiting for you. The proper analogy is probably James Cook, going out into the blank spots on the map. But circumstances change, and Star Trek is a lot more like a modern navy than one of that period. (This isn’t a necessary feature. David Drake’s RCN series does an excellent job of porting the 1700s RN into space. The Captain often goes off personally on dangerous missions, but there’s someone competent still on the ship.) I could see the Captain leading an away party, particularly if there’s something delicate to do, but he’ll have the Marines on call to bail him out if something goes badly wrong.

Despite the tagline of “explore strange new worlds”, the Enterprise doesn’t do much actual exploring in Star Trek — or at least in TNG, which is what I grew up on and what I know by far the best. Most of the away-team missions I remember from it took place in known space, and followed a formula of “our sensors/another ship/word on the street detected something weird on this planet, let’s go check it out”. Other common formulas were “this colony went dark, we need to figure out why” (that’s where I’d send the Marines) and diplomatic problems of various sorts (where sending the captain might actually be appropriate, since he’s the ranking Federation representative).

If I can call a car “new” for the first year I own it, I can certainly call a world “new” for the first generation of colonization. I do agree that for diplomatic meetings with new-ish civilizations it makes sense to have the captain at the table. But the XO stays with the ship.

Star Trek had to be entertaining television, though, so having the captain go on away missions and be in peril was all part of that.

When they did the sensible thing in TNG and never had Picard and Riker on the same away team, the complaints from some fans/viewers (at least in the early seasons) were that Picard was boring, he never did anything, he just stayed on the ship and Riker got all the action. Just wasn’t as good as Kirk, who went down to planets and got involved! That’s why they started sending Picard on more away missions in later seasons.

So realism isn’t going to be a high value for shows where drama and tension are the drivers of plots.

Another bit that Star Trek and its imitators usually get wrong is that the person who sits in the captain’s chair and gives orders when the captain is occupied elsewhere is not automatically the highest-ranking officer remaining on the ship and may not even be the highest-ranking officer standing on the bridge at that moment. There will be a list of officers who are qualified to stand watch as OOD(*), the captain picks one, and they then speak with the captain’s delegated authority until relieved.

If there’s a serious possibility of combat or other catastrophe, then as bean notes the CO or XO will almost certainly be actively commanding. Otherwise, the XO probably has a desk full of administrative tasks waiting for them (that, rather than “backup commander”, is most of why ships have XOs), and there’s probably half a dozen junior officers who will someday have commands of their own and need the experience, so the captain will rotate through them as he or she sees fit.

If combat or other catastrophe leaves the captain, XO, and the designated OOD all unavailable, then you revert to “highest rank available takes command”.

* “Officer of the Day” or “…the Deck”, sometimes the “Command Duty Officer [CDO]”, and I keep forgetting which services use which terminology; IIRC the USN and RN have it different.

USN is Officer of the Deck. I’m not sure the designated OOD would always remain in command. If he’s a junior lieutenant, he’s probably going to pass command to the Tactical Action Officer. On the other hand, you’re right that he might not pass command to the engineer, and definitely wouldn’t pass it to the surgeon.

As someone who has learned naval operations from Top Gun and JAG, but could recite most of the ST:TNG technical manual verbatim:

Is it possible for an unrelated officer to be qualified to command the ship? Could the ship’s engineer or surgeon get the required qualifications so they could take over in an emergency/skeleton crew operations/funzies?

In ST:TNG the chief medical officer got the required qualifications and stood watch on the night shift occasionally for personal reasons, leading to cases where she actually leveraged that background. OTOH, the ship’s counselor decided to do the same thing and it Always Went Badly.

In the USN, there are at least three categories of officer: line, restricted line, and staff. Line officers are surface warfare officers, submarine warfare officers, naval aviators, special forces, those sorts – the people specifically trained to lead and fight the ship; basically anyone who wears a uniform in TNG.* Staff officers are medical, supply, JAG, and other pure support functions – the blue uniforms. Restricted line officers are in between: technical experts like engineers and aircraft maintainers, intelligence officers, and some others – the yellow uniforms.

As a rule, only unrestricted line officers can command ships. It’s possible to move between the categories, but doing so, AFAIK, amounts to a career shift, like from the engineering department to sales at a big company.

* Note, the TNG colors only loosely match IRL roles, and are not even consistent in universe.

Also, no matter how wrong “Star Trek” may consistently get this, I don’t think it ever got it as wrong as “Aliens” did. Faced with a dead colony and a high probability of combat, that group sent down an away team consisting of the commanding (and only) officer, the senior NCO, and every man, woman, and sentient robot on the ship – even though said ship still contained potentially useful capabilities like the backup shuttlecraft and the orbital nuclear bombardment weapons.

Not sure if this is the right forum for a personal finance question (longtime reader, first-time poster!), but thought I’d take a stab at it as this is a rationalist/data-driven crowd.

I’m a mid-30s guy with a relatively high-paying sales job (tho I live in an expensive coastal city), and a relatively small nest egg (let’s say between $100k and $250k). I never made or saved much money until my early 30s, when I moved into my current career, so now I feel I’m playing financial catchup. (My credit is decent and I have no debts at this point). Like most people (or maybe all people….), I would like to be more financially well off than I am. I’m also quasi-well-read on modern personal finance; I don’t need any 101-level explanations. Also, I save quite a bit of my income annually (provided I don’t have a down year).

My question is: rather than putting everything in an index fund or real estate for moderate gains, why not continuously risk portions of my nest egg on asymmetric bets until I find one that pays off? Asymmetric meaning the downside is limited and the upside is 10x, 20x, 30x or more. I don’t want to gamble blindly (“put it all on red!”), and I wouldn’t risk my whole net worth on one wager, but why not invest in speculative bets in chunks? Examples would be angel investing, options trading, etc. I fully understand the risks (losing all of my personal savings on failed bets); however, I’m intrigued by the potential upside, uninterested in ‘average’ returns, and willing to basically be a speculative investor. Just curious if anyone else is doing this, or if there’s any literature about it (a book, a Medium post, anything). The closest example I can think of is Musk putting literally all of his PayPal money into Tesla and SpaceX, if that story is true.

Anything that everyone knows is already priced in, so you don’t just need to be right, you need to be more right than everyone else. Investing in your own company can make sense: you know more about your company than most people. Playing AMD options if you are a big PC gamer could make sense, as could playing SSTI options if you are in law enforcement. Just some examples.

Asymmetric bets exist, but you have to pay for them: e.g. calls give you lots of upside and limited downside, but you need to pay whoever writes the call a chunk of extra money for the privilege of getting such a nice probability distribution.
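For anyone who hasn’t traded options, the shape of that payoff is easy to see in a few lines (the strike and premium below are made-up illustration numbers, not quotes for any real contract):

```python
def call_pnl(spot, strike=100.0, premium=5.0):
    """Profit/loss per share for a long call held to expiry:
    downside is capped at the premium paid, upside is open-ended."""
    return max(spot - strike, 0.0) - premium

for spot in (80, 100, 105, 130):
    print(f"spot {spot}: P&L {call_pnl(spot):+.2f}")
# Worst case is always -5.00 (the premium); at a spot of 130 the P&L is +25.00.
```

The premium is exactly the price of that asymmetry: the writer collects it up front in exchange for taking the other, open-ended side of the bet.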

Playing AMD options if you are a big PC gamer could make sense, as could playing SSTI options if you are in law enforcement.

Or not, because it gives you false confidence. The company that makes the best graphics card is not necessarily the company that has the best stock to buy. A whole lot of “investing 101” advice centers around “don’t think your non-Wall-Street industry expertise or casual hobby makes you well qualified to pick stocks in that area.”

I don’t know if you’re the right kind of sales guy for this, but if you are, join a pre-IPO/post-traction software company that does enterprise business (and thus needs high-competence salespeople). They should give you stock options. These are high upside, and the downside is you probably get lower expected overall compensation.

But you should still earn enough to pay your bills and put money away into conventional savings so you aren’t betting the farm.

and the downside is you probably get lower expected overall compensation.

And that you’re not diversified, such that if the company fails, you’re out all of your investment capital AND your regular income stream (although perhaps you’re a good enough salesman that finding a similar job will be a trivial concern, in which case this matters less)

Have strongly considered this, for sure. I think I’m a bit old for the hot startups (they prefer employees in their 20s, generally). Also, the equity thing is pretty high risk too: I’d hate to lose money on a trade, but I’d really, really hate to spend 4+ years of my life working 60+ hours a week only for the company to not work out and the options to equal nothing. Which seems to be the startup scene, generally. Also, I think we’re sort of entering a bear market for tech startups.

I’m 40, and have worked in startups for most of my career. The early-stage startups that only hire people in their 20’s aren’t a good idea anyway unless your financial plan is “>$10M or bust.” If you’re at least moderately interested in high six figure or low seven figure payouts, then you should be looking at places that have already achieved traction and have high dozens or low hundreds of employees.

I don’t work 60 hour weeks except in rare crisis situations.

Your options are likely to be worth nothing, but your salary (in these later-stage startups that I’m suggesting) should be enough that you aren’t blowing years of your life to no reward even if the options are worth nothing. For example, I make around $200k/year in salary.

I’m in engineering/management, not sales, but my understanding is that for a B2B company, you should be able to see something broadly similar in sales.

The financial payout curves in Silicon Valley are dramatically misstated, in every direction, for every class of company, by people who should know better (most of whom aren’t lying for their own benefit.)

I would summarize it in four points:
– you dramatically underestimate the money available from large profitable companies (Google, Facebook, Apple, etc), if you are a strong and experienced person or become one, play the game reasonably well, and get 60-70% percentile lucky. This is a hard ask but not an impossible one.

– If you are especially special, that curve continues going up. Substantially. Arguably the best thing to do with a really good startup offer is to get Google to match it, not take it, even if you are an actual, not recruiter bullshit, rockstar.

– you dramatically overestimate the payoff from a successful startup, anywhere other than the 99.5% percentile of startups, anywhere other than being C-level or founder. This includes most jobs at most unicorns which successfully exit. I want to make this clear: my guess is that a very good, very successful, early but not single-digit-employee-number person at Snapchat, who sold every single share of stock on IPO day (which you probably could not do for lockup reasons; this person is probably still holding most of their stock), made $X00,000. (Value of X may vary, but gun to my head I’d guess 3.) This is wildly better than you can expect to do, liquidity wise, and is not enough to buy a house in SF. It will not change your life.

– being a founder or C-level invite at even a moderately successful place can do very well. That has substantially more career risk than other options, and is hard to succeed at and hard to obtain. (It is not hard to obtain the title “CEO of a startup”, but most people so labeled are buying the prestige of the title, not any reasonable shot at an exit. That prestige is not worth as much as they think it is.)

He’s a sales guy, not a tech guy. I’m not sure that there’s as strong a path to $250k-$500k total comp at the Big Fourish tech companies for sales guys as there is for tech guys (I really know nothing about sales in GoogAppleBookSoft).

Clearly, there are some very, very, very well-compensated sales guys out there in the general world, but I don’t really know how objectively best at sales you need to be to get that kind of comp.

I think that a reaction to a lot of startup mythology has been to somewhat underplay the amount of money that is available to people from an IPO. I made $X00,000 from a company with a successful IPO, but which did not go on to become $100B or $50B or anything, and I was a very junior employee and had an employee number in the four digits. This was, admittedly, a decade ago, and many more modern companies have left less money on the table for their employees, but if employee #75 at Snapchat with a great history there made less than $1,000,000, then that’s crazy.

– you dramatically underestimate the money available from large profitable companies (Google, Facebook, Apple, etc), if you are a strong and experienced person or become one, play the game reasonably well, and get 60-70% percentile lucky. This is a hard ask but not an impossible one.

That is very true. However, I think a lot of people currently at AmaGooFaceSoft underestimate how hard it is to start at the bottom at one of these companies these days. It’s a lot harder than it was 5 or 10 years ago. These companies hire a small city every year. You will be considered junior for years after being hired. The environment is also getting more and more political– a lot of the low hanging fruit has been plucked at these companies.

you dramatically overestimate the payoff from a successful startup, anywhere other than the 99.5% percentile of startups, anywhere other than being C-level or founder. This includes most jobs at most unicorns which successfully exit. I want to make this clear: my guess is that a very good, very successful, early but not single-digit-employee-number person at Snapchat, who sold every single share of stock on IPO day (which you probably could not do for lockup reasons; this person is probably still holding most of their stock), made $X00,000.

Just as a point of reference, I have friends who joined a C series startup and made $X00,000 on options when it went public.

So let’s talk about Snapchat. We need to define what “early” means here. Series A? Series B? I certainly wouldn’t consider series C to be “early”, so let’s just say we mean B. Let’s assume that a rockstar engineer joined snapchat at series B and got 0.05% of the company.

The company was worth 24 billion when it IPOed– let’s say it’s worth around 18.24 billion now, since it IPO’ed at 17 and is now at 13 dollars a share. Super rough math here– there are probably complexities I’m overlooking, etc.

If there had been no dilution, the engineer would now have about 9 million dollars. I’m too lazy to try to ballpark the dilution right now (it’s late). However, just as a Fermi estimate, a million or two is not unreasonable for the snapchat series B guy.
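Writing out that Fermi estimate (every number here is either a guess from the thread or my own illustrative assumption, not actual Snap cap-table data):

```python
company_value = 18e9     # assumed post-IPO market cap, per the thread
series_b_stake = 0.0005  # hypothetical 0.05% grant at series B

undiluted = company_value * series_b_stake
print(f"undiluted: ${undiluted:,.0f}")  # $9,000,000

# Later rounds dilute early holders. Assuming, purely for illustration,
# the stake ends up at 15-25% of its original share of the company:
for remaining in (0.15, 0.20, 0.25):
    print(f"{remaining:.0%} remaining: ${undiluted * remaining:,.0f}")
# ...which lands in the "million or two" range suggested above.
```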

being a founder or C-level invite at even a moderately successful place can do very well.

That is very, very true. Being a founder is very lucrative.

[being a founder or C-level invite..] has substantially more career risk than other options, and is hard to succeed at and hard to obtain.

Is that really true? Failure doesn’t seem to have much of a stigma in Silicon Valley– unless, of course, you get labeled as a political malcontent or sexual offender. I think I would rather be a failed startup founder than a middle manager passed over for promotion at AmaGooFaceSoft.

The limiting form of this is buying lottery tickets (…which I occasionally do as mostly-a-joke and sort of a metarational ploy when Powerball gets to positive expectation. See also this joke.) There exist objective functions of money where this becomes a good idea, but I think you’re likely to be deluding yourself if this is a substantial part of your portfolio.

The basic answer to your question is this: If you expect a 20x payout on a win, you’re either an investing genius or rationally expect a less than 5% chance of winning. Given that even a 20x payout won’t change your life unless you’re risking substantial sums…you’re going to drain your finances pretty substantially doing this.

What complicates this more is that most people consider money to have diminishing returns: this makes the bets even worse, since they’re going to be mostly priced linearly. I don’t know if diminishing returns are actually accurate across the scale, though. I sometimes say that going from my current net worth to 20% higher wouldn’t make a difference…but that going to 10x would dramatically alter my life. But it’s hard to make this concrete.
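One standard way to make “diminishing returns” concrete is log utility. A minimal sketch with made-up numbers, showing how a bet that is exactly fair in dollars can still be a loser in utility terms:

```python
import math

wealth = 100_000
stake = 50_000       # half the nest egg on one asymmetric bet
win_prob = 0.10
payoff = 10 * stake  # 10x the stake on a win

win_wealth = wealth - stake + payoff  # 550,000
lose_wealth = wealth - stake          # 50,000

ev_dollars = win_prob * win_wealth + (1 - win_prob) * lose_wealth
eu_bet = win_prob * math.log(win_wealth) + (1 - win_prob) * math.log(lose_wealth)
eu_pass = math.log(wealth)

print(round(ev_dollars))  # 100000: the bet is exactly fair in dollar terms
print(eu_bet < eu_pass)   # True: a log-utility investor still declines it
```

If, as the paragraph suggests, a 10x outcome really would change your life in a way a 20% raise would not, then your utility curve isn’t logarithmic everywhere, and this standard analysis fits you less well.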

In short: I wouldn’t, in general, recommend it, for any substantial amount of money. A lot more people go broke trading forex than become rich. Angel investing is…well, I don’t know anyone who’s gone broke doing it, but I do know people who have wasted anything from “car” to “house” money with nothing to show for it, and amateur angeling is a really bad idea because of adverse selection (you will only hear of startups who can’t get Paul Graham and other prominent VCs to pay attention, and that’s a bad signal.)

Diminishing returns on money is mostly hedonic studies, and hedonic studies are mostly bullshit, and/or pointing out that Bill Gates and Warren Buffett are not experiencing continuous, decade-long orgasms (AS FAR AS WE KNOW).

I, like you, suspect that a substantial raise won’t radically change my life but “fuck off money” would, though it probably wouldn’t make me buoyantly happy all the time.

Tribal people around the world are inherently given to the “fuck off, money” attitude, as they are materially impoverished. Yet despite their great, almost utter poverty they are by all accounts even happier than the industrialized West. One could take a lesson from them. That it is better to literally be dirt poor, in the sense that you live in mud huts, than to engage in consumerism.

Furthermore, as this graph indicates, there is correlation between wealth and reading habits on a national scale, with wealthier countries such as Japan and Korea reading less than their poorer neighbors such as China. This is no doubt because wealth facilitates wanton excess. Why do something as pleasant as reading when you could be playing stupid zero-sum signaling games with ever more costly positional goods? Or just pumping yourself full of drugs and playing video games?

I am afraid that the hedonic studies are far more correct than you could ever imagine. Wealth does not enhance life. It ruins it. To the OP: you are better off finding cheap, rewarding hobbies than engaging in the consumer lifestyle to which you are heir. The need to consume costly things has been inculcated in you since birth, but not for your own benefit. Not even advocates of libertarian society such as Jack Vance think that immodest wealth is good. Do yourself a favor and live cheaply.

Also, the whole tactic of “Hedonic studies are true, I know because I read hedonic studies” isn’t what you’d call iron-clad logic. The hedonic treadmill, I believe, applies far more to how you describe your happiness than to how you feel.

If you haven’t figured out that the hedonic treadmill is true then you obviously haven’t contemplated it very much. Most things, from food to music, are more enjoyable if you abstain from them for a while. The enjoyability of most things deteriorates the more you indulge in them. Rich food ceases to be much of a treat if you eat it all the time. Music, likewise, loses its edge. Practically all forms of enjoyment that capitalism provides become habituated to after prolonged indulgence. I have indulged in rich foods in the past. I no longer do so, because I realise it is pointless. This phenomenon of futility in regards to earthly indulgence has been known for thousands and thousands of years. Marcus Aurelius, Buddha, and the patron saint of capitalism himself, Adam Smith, have all remarked on it. So if you still have not discovered this eternal verity at this late point in time, I am afraid you are a little late to the party.

Here’s what my “contemplation” of the hedonic treadmill has produced: Happiness isn’t one thing. Sexual gratification isn’t the same as “laugh until you cry” reactions to a hilarious conversation or piece of entertainment, which isn’t the same as the kind of short term sensual pleasures that are apparently your sole focus, which isn’t the same as the satisfaction of seeing my daughter grow up, which isn’t the same as feeling secure in my future or looking back at my past accomplishments with satisfaction.

What hedonic studies prove is that poor people still laugh and rich people still have shitty days. Poor people can still have kids and spouses that they love, and rich people may still have depression or other various neuroses that make them miserable.

What hedonic studies are deeply unconvincing about is the idea that, yeah, sure, your life is just as good if when you or your wife get pregnant the mother and baby have a 99.9% chance of surviving the experience versus a 90% chance. Don’t worry, someone using an incredibly blunt instrument self-reported survey or dubious anthropological data has found that you aren’t really going to be any more or less sad on net if your wife dies in childbirth! And you all won’t really be more or less sad on net if your kid dies from a childhood disease! And you won’t really be more or less stressed on net if you know for five decades that you’re one misfortune from losing your house and living on the street! And you won’t really be more or less satisfied with your work if you have a repetitive service job versus an intellectually stimulating creative job.

Wealth is not sufficient for happiness, and for many kinds of happiness, it is not necessary, either. But there are huge parts of happiness that wealth makes enormously easier, and this is so straightforwardly, obviously the case that when a hedonic study fails to confirm it, the obvious conclusion is that hedonic studies aren’t very good at measuring this particular aspect of the complex and nuanced experience of “how you feel.”

But it’s politically convenient for people to ignore this, so they do.

And you won’t really be more or less stressed on net if you know for five decades that you’re one misfortune from losing your house and living on the street!

Ironically it is capitalism which spends vast fortunes on propaganda persuading people to live this kind of lifestyle, such that most people simply adjust their spending to match whatever their earnings are, no matter how high that is. Hence, you find six- and even seven-digit income earners going bankrupt during recessions because they spend as much of their relative income as poor people do, which is to say all of it.

What else you got? The death of a child? Hedonic studies show that certain tragedies, such as being crippled, actually do often result in long-term decreases in happiness, so you are arguing against a strawman here. The hedonic treadmill doesn’t purport to apply to everything, just a great deal of things.

Having kids, incidentally, is supposed to decrease your long-term happiness, not increase it. So you may be an outlier in that regard if you are experiencing things differently.

In sum, I don’t think you have named enough things–or indeed, anything at all–that hedonic studies have improperly assessed. Everyone knows that the inner workings of the human mind are complex and difficult to break down into neat theories. But just as intelligence can be loosely appraised in the form of IQ, so too can happiness be appraised and studied. It is a workable concept that provides important insight into the largely futile nature of material wealth.

Whenever you lead off with “capitalism does this,” you certainly aren’t working very hard to disabuse anyone of the impression that it’s politically convenient for you to believe in what you’re espousing.

I have kids and am married: are you? Or is reading the summary of other people’s surveys your only view into this gigantic chunk of human experience?

Having a child was very clarifying in how narrow a window hedonic studies are. On the one hand, yes, absolutely, it’s extremely clear how a bunch of your happiness goes down as soon as you have a kid. You’re tired and stressed and somebody that you love so much screams at you (babies and toddlers are what we would call “emotionally abusive” if they were adults). On the other hand, there’s a deep satisfaction, moments of incredible joy, and a whole mess of emotions that we can loosely lump under “happiness.” And my point here is explicitly not “on balance, the end result is you’re more happy.” My point is “you can’t add and subtract these things. Some subcategories of happiness are lower and some are higher and there’s no really meaningful summary.”

Hedonic studies are not like IQ tests: they have a huge additional layer of indirection. IQ tests actually test your intelligence. They require you to demonstrate intelligence. And yes, it’s a bunch of loosely related sub-scores that they add together, but the tests measure those sub-scores. Hedonic studies just ask you how happy you are. Perhaps unshockingly, an IQ “test” that just asked you how smart you are (“Please be honest!”) would do a shittier job of evaluating your IQ than a real IQ test does, and also would probably bias people towards evaluating the sub-components of their intelligence in a way that reflects the social value that people put on different intelligence subcomponents.

My understanding of the “do children make you more or less happy?” research is that different studies have produced contrary answers almost entirely because of how they define and measure happiness. Namely, some studies asked about moment-to-moment, day-to-day happiness, and others asked about overall, reflective happiness or life satisfaction (i.e. the experiencing versus the remembering self). So in the former studies, children make people less happy, since they require so much energy and effort and worry, whereas in the latter studies, children do make people happier, since at the end of the day they get to look back and feel good about being parents. I think the book All Joy and No Fun discusses this, but it’s been a while since I’ve read it. But it makes sense, and gels with everything I’ve observed in people — I certainly feel more happiness on my days off (they’re more fun), but if I were always goofing off and never doing anything meaningful or useful, I’d feel a lot worse about myself at the end of the day.

EDIT: oops, basically repeating what sandoratthezoo said; that’s what I get for going afk in the middle of writing a comment.

Practically all forms of enjoyment that capitalism provides become habituated to after prolonged indulgence.

I’m curious how far you are willing to carry the argument. I think most people would agree that, as a rule, the marginal utility of income is declining. Are you making the stronger claim that, over the relevant range, it is zero?

McCloskey estimates that the average per capita real income of the world is currently about ten times what it was through most of history–twenty to thirty times if you limit it to the developed world. Do you think you, or others, would be as happy with an income one twentieth what it now is?

That question won’t get you the right answer for what you actually want to know, since we know that people get unhappy with low relative wealth, but not low absolute wealth (above a certain threshold).

But if you are correct, the solution for almost universal happiness is simple. Make sure everyone knows the figures for how high real incomes are now compared to the past and persuade people to view their relative status as relative not to the handful of people now alive but to all people who have ever lived.

@sandoratthezoo
The causation goes the other way. My politics flow from my views on happiness not the other way around.

@David Friedman
Aside from the effects of modern medicine, I see almost no improvements in happiness. Indeed, I see regressions in many regards. The deleterious social influences of many technologies such as automobiles, televisions, electric lights, and even central heating are abundantly clear to me. The effects of these devices are for people to be less communal but more indulging of their bad habits. Why walk when you can ride? Why spend time with others when you can bask in the sickly glow of the television set? As the television is wheeled in, the poker table is thrown out. As electric lighting is furnished, as houses grow larger, as heating is provided to all corners of the house, the social incentives for close kinship are frayed, forgotten, lost. Every advance leads to greater sensory stimulation, stimulation of the glands. But this does nothing, for all stimulation is attenuated by the pleasure centers of the human brain which brook no long term distortions to their perception. The trend of human accomplishment is for natural human life to be eroded and to be replaced by utterly futile efforts to proffer the brain what it cannot have–short of fundamental rewiring–long term increases in pleasure.

Make sure everyone knows the figures for how high real incomes are now compared to the past and persuade people to view their relative status as relative not to the handful of people now alive but to all people who have ever lived.

And the Idiot’s Obvious Retort (yr. obdt. svt. playing the part of the idiot) is that “Great, I’m vastly richer than my counterpart in the same circumstances in 1910, but I’m not living in 1910 with 2017 salary and 1910 expenses, I’m living in 2017 with a 2017 salary and 2017 expenses.”

The figure is for real income not nominal income, so it is already taking into account changes in prices. Living in 2017 with an average 2017 income in the developed world you can consume twenty to thirty times as much stuff as an average person with an average income five hundred or a thousand years ago.

The real point is that today items (especially electronics) need replacing far faster than yesteryear, so the 10-30 times richer statement needs to take this faster depreciation into account. Does it?

I do recall images of genuine medieval clothing which was patched and resewn like crazy. Outside of very large families (and thrift shops), how many clothes are even hand-me-downs today? It isn’t even possible to wear patched clothing without seeming sketchy or violating an office dress code today.

My computer chair has been replaced twice in the last 10 years (the current one is about 4 years old and is still adequate, though when it breaks it probably won’t be repairable). Which is a far cry from generational chairs and stools.

Farm labor used to be something which paid you back throughout your life (i.e. children who would support you in your later years); today farm labor is a tractor and sorting machinery which is only as good as long as it lasts. And you’re still going to have children anyway, just more spaced out such that hand-me-downs get less use than in the past. And good luck feeding them salted porridge when they’ve heard about ice cream from friends and media.

You’re right….. I get it, I really do. It’s just tough to see people achieve fantastic speculative returns all around you, and sort of content yourself with an S&P 500 index fund. Just shooting from the hip- people got rich off holding bitcoin for years, investing in any number of startups (Google/Facebook/Tesla/Uber etc.- did you hear the story about the SoCal school district that invested $15k in Snapchat and now it’s worth millions etc.) The Chinese stock market in 2015 if you got out at the right time. And so on. It’s tough always being late to the party….

I’d like to see a broader theory of/overall look at speculative investments and return, haven’t really found anything out there yet

In a very general sense, this is basically the business model of Private Equity firms. So you should consider that those are the people you will be “competing against” in the marketplace, and they have teams of Harvard MBAs working 80+ hours a week on analyzing this stuff. And even then, on average, they typically earn a few percentage points higher than the S&P at best, which matters a lot if you’re investing billions, but matters fairly little to the 100k or so you might be able to throw at this.

You say you’re not interested in “average returns”, but as I said, on average, that’s about what you get. A 10x return is the same as a 2x return if you’re only 20% as likely to actually achieve it, and 80% of the time you strike out.
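That equivalence is just a one-line expected-value computation (the probabilities are the illustrative ones from the paragraph above):

```python
def expected_multiple(payoff, win_prob):
    """Expected multiple of the stake when you win `payoff` times the
    stake with probability `win_prob` and lose the stake otherwise."""
    return payoff * win_prob  # the losing branch contributes zero

print(expected_multiple(2, 1.0))   # 2.0: a sure double
print(expected_multiple(10, 0.2))  # 2.0: a 10x shot that hits 20% of the time
```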

I would strongly recommend against this as anything resembling an “investing” activity. If you want to do it, treat it as gambling for entertainment purposes, which it more closely resembles than any sane “retirement saving” strategy.

If you want to maximize expected wealth you’ve probably got it backwards. Usually, asymmetric bets with a big, unlikely upside have negative expected value, while accepting bets with a big, unlikely downside earns a positive expected return.

People like positive skewness and pay a premium for it, and people hate negative skewness and pay premiums to avoid it. Which is why lotteries and insurance companies are profitable.

You mention options trading as something you’re thinking about – in that case what I describe is pretty well-documented. The long-term profitable strategy is selling options, not buying them.

I think studies are pretty clear that you can’t increase your expected value by including some high-risk elements in your investments – there are lots of very fat investment funds run by very smart people, and they’re out there bidding high-risk investments to something pretty close to their expected value.

People are right that if you’re comfortable with a few to several years of higher risk in your career, you probably have options there. You have some control over what skills you develop and how hard you work, and everybody needs a good salesperson.

For many years, I did high-risk investing with about 10% of my portfolio–similar order of magnitude to yours. It paid off for me–I figured that I increased my return about 1% on my whole portfolio. Here is what I did.

I invested in market stocks, of businesses I knew something about–mostly competitors of, or in markets adjacent to, my employer. I looked for places where prices were depressed, and I knew enough about the business and the market that I expected that depression to be “long-temporary” — more than 6 months, less than 5 years. Short-term trading seemed to me to advantage leveraged professionals, and I did not want to spend a lot of time on it–but investing 5%-10% of my portfolio in things that I thought might easily double in two years paid off well for me. (Then I took a job where I had access to my employer’s investment strategy, and could only invest in index funds without a lot of hassle, so I stopped.)

Rational Expectations theory implies that public information is already priced into the stocks, so to have a higher than normal expected return you need to act on non-public information, on something you know on the basis of which you are willing to bet against the world. For instance …

I bought the original Macintosh when I was a professor in Tulane Business School. I mentioned to a colleague that I was buying it, and he asked me why I didn’t buy a PC Jr. instead.

It was a reasonable question from his standpoint–they were probably about the same size. But the PC was running an 8088, the Mac a Motorola 68000, a chip normally used at the time for much more advanced multiuser machines. The reason it needed all that power was that running a graphic interface was expensive. I had been using a personal computer (an LNW80, a TRS80 Clone) for some years, had written a book on it and had seen a film about the Xerox Star which made it obvious to me how much better a graphic interface would be. On that particular issue I was an insider, not in the legal sense but in the sense of having a bunch of information which, that early in the personal computer age, few investors had. I took my colleague’s ignorance as the norm and concluded that Apple stock was probably seriously underpriced. So I bought some.

I made two other successful bets at various times, as well as a few less successful ones. The basic principle in each case was that Rational Expectations told me that I didn’t have to look at all of the relevant factors that lots of other serious investors knew, because those would already be priced into the stock. What I needed was one factor where what I thought I knew implied that the company was worth substantially more than it otherwise would be. If I was correct that other investors didn’t share my belief, hence that it wasn’t priced into the stock, but the belief turned out to be wrong, I would get about the average return on my investment. If I was also correct about my belief, I would get a better than average return. So a “heads I win, tails I break even” bet.

Obviously this doesn’t hold if enough other investors share my belief so that the current price reflects, say, a .5 probability that the belief is true.
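The payoff structure of that kind of bet can be sketched with a few lines of arithmetic. All figures below are hypothetical illustrations, not numbers from the comment:

```python
# A sketch of the "heads I win, tails I break even" payoff structure.
# If the private belief is wrong, you still earn roughly the market
# return, because everything else is already priced in.

def expected_return(p_belief_true, return_if_true, market_return):
    """Expected return of a bet where being wrong leaves you with
    approximately the market-average return."""
    return p_belief_true * return_if_true + (1 - p_belief_true) * market_return

# Suppose the private belief has a 50% chance of being right and,
# if right, implies a 30% return; being wrong leaves the market's 7%:
ev = expected_return(0.5, 0.30, 0.07)
print(round(ev, 3))  # 0.185
```

The asymmetry is visible directly: the downside scenario is pinned near the market return, so any positive probability on the private belief raises the expected return above average.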

Sam’s description of his strategy sounds like the same general approach.

The standard (academic) finance explanation would be that the market portfolio maximizes the tradeoff between return and risk. Investors who are willing to tolerate more risk achieve their highest expected return not by overweighting riskier assets, but instead by levering up to invest more into the market portfolio. This is pretty straightforward to prove mathematically, though of course the devil is in the assumptions.

As an individual investor, the way you’d implement this sort of strategy is probably to take out as much debt as you’re comfortable with (e.g. a mortgage), hold as little cash as you can, and invest the rest in a broad, passive index fund.

Outperformance with any other strategy would require some combination of (1) luck, (2) inside information, or (3) exposure to more esoteric risks beyond price volatility. You obviously can’t count on (1), the chance that you have an actionable form of (2) is extremely remote, and (3) is mostly a technical issue that probably doesn’t change anything in the end.

So if I were looking for above average returns, I’d figure out how to invest more money into the market instead of trying to shift what I was investing in.

I believe that in The Black Swan he suggested that far out of the money put options were under-priced because sellers were underestimating the likelihood of black swans. There was some famous investor who made a bundle of money on those types of instruments during the 2008 crash. Can’t remember the name though.

You’re probably thinking of Mark Spitznagel, some of whose funds made a lot of money in 2008.

Taleb made a lot of money from tail puts in the 1987 crash. Pricing has changed since then.

Taleb and Spitznagel ran a hedge fund, Empirica, to pursue this strategy from 1999 to 2004. The plan was to lose money most years and make it up occasionally. It didn’t work. Oddly, they didn’t manage to make money in 2001.

Spitznagel made a new fund, Universa, in 2007, after Taleb’s books provided advertising for the idea. Taleb is lightly involved. But it isn’t even clear that’s the strategy it pursues.

99% of my investments are in index funds. I put ~1% of my investments in a basket of cryptocurrencies. I can afford to lose it all, but there’s potential for 100x returns. It’s a buy and hold strategy so I won’t know how well it will pay off for at least 10 years.

My risky cryptocurrency investment is fun to follow, I won’t be upset if I lose it all, and as a bonus, if the investments do well I think it means good things are happening for my anti-state ideology. I don’t see myself making any other risky asymmetric bets and am content with the vast majority of my investments earning the average returns of the entire market.

I’ve considered investing in cryptocurrencies, but the damn nagging conscience that thinks that speculators and investors are everything that’s wrong with the cryptocurrency “ecosystem” keeps getting in the way.

Research the blockchain technology and the specifics of each crypto to see if you think there’s value there. I think censorship-resistant, permissionless value transfer holds great promise for the future. It’s hard for me to know what implementation will be best in the long run, and each crypto seems to have its own niche (store of value, privacy, programmability, etc.), so I’ve diversified a bit.

I don’t see how you can have a young, exciting technology without having speculators.

For a cryptocurrency to be a “currency”, imo, it should first and foremost be a medium of exchange, not a good to be hoarded. I can respect investing in a crypto company (or decentralized system) but markedly less so for buying up *coins and sitting on them.

Well it’s relatively early days for cryptocurrency, and there’s a chicken and egg problem with vendors and buyers. Vendors want lots of bitcoin users before offering that payment option, and potential bitcoin users want lots of vendors accepting bitcoin before they’re willing to buy the currency.

Cryptocurrency is also still a bit clunky to use and not user-friendly for the non-techgeek population (most people). I’ve seen the comparison that cryptocurrency is like the early days of the internet, when it was hard to use and the user interfaces weren’t seamless. Now, everyone and their grandma can use the internet and so it will be in a decade or so for cryptocurrency users.

Right now bitcoin can only handle 3-4 transactions per second, in contrast to a company like Visa that can handle thousands of transactions per second. That makes it hard for bitcoin to function like a currency. An improvement to the bitcoin currency protocol was made recently (Segwit) which will allow other improvements to be made (like Lightning Network) to give the same throughput as credit card companies. It’ll take some time to get those improvements implemented in a safe way though.
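The throughput gap described above is easy to quantify. The bitcoin figure (3–4 transactions per second) is from the comment; the Visa figure of 2,000 tx/s is an assumed stand-in for "thousands":

```python
# Back-of-envelope daily transaction capacity. Bitcoin's ~3-4 tx/s
# is from the comment; 2000 tx/s for Visa is an assumed round number.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

btc_daily = 4 * SECONDS_PER_DAY       # upper end for bitcoin
visa_daily = 2000 * SECONDS_PER_DAY   # assumed figure for Visa

print(btc_daily)                 # 345600 transactions/day
print(visa_daily // btc_daily)   # 500  (Visa handles ~500x more)
```

At a few hundred thousand transactions a day globally, base-layer bitcoin clearly cannot serve everyday retail payments on its own, which is the motivation for second-layer proposals like Lightning Network.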

In other words, cryptocurrency is still really early, and anyone buying in is speculating to a large extent. As the networks get built out and protocols are updated, some cryptos will do well and some won’t, so most people buying in now are betting on the future potential, and so will be mostly hoarding while waiting for future improvements. Still, you can use your bitcoin at websites like overstock.com and to purchase gift cards, so it does have some present ability to function as a currency, it’s just not widespread.

3. Every problem with bitcoin is a business opportunity. See the first answer in the link above for a business that attempted to mitigate the confirmation time issue. That business has failed since the answer was written, but expect other businesses to try to fill the gap as the value-proposition becomes larger over time.

4. Other cryptocurrencies confirm much faster than bitcoin, though they have their own vulnerabilities. It’s possible that in the future these cryptos will be used for everyday low-value transactions with bitcoin serving as a settlement layer and store of value. A solution like Lightning Network will make it very easy to swap between cryptocurrencies. Also, Lightning Network itself will allow instant payments.

1. In the civilized part of the world we no longer use checks and normally use debit cards instead of credit cards, so your first point comes across to me like if someone would pitch a new car concept as being better than the horse-and-buggy.

2. Some Dutch web shops support ‘afterpay,’ which is even more risky, so you may be correct that people won’t care. Still, not all payments will be small.

3. That seems like a fully generalizable defense of anything that is shitty/broken: It’s a great opportunity for other people to fix it.

4. My layman perception is that bitcoin suffers from being a version 1.0 product. So you may be right that other variants may be better. However, a major issue seems to be that people are very worried that other variants will become obsolete and worthless, so they naturally gravitate to bitcoin.

@Aapje
1. I live in the civilized world and still pay a few bills with checks. I also use a credit card for almost every transaction I make (paid off in full at the end of each month). In 2014, credit card spending accounted for most of total card spending.

2. Most payments people make are small. When large payments are made on an infrequent basis, waiting 10 minutes probably won’t be a big deal (like the guy that just bought a house with bitcoin).

3. If confirmation times are the one thing keeping bitcoin back from widespread adoption and low confirmation times would be valuable to people, then there are strong incentives for smart entrepreneurs to find innovative solutions. Of course no solution is guaranteed, which is part of the reason crypto investments are still more highly speculative. There’s a non-zero probability my current crypto investments go to zero.

4. Bitcoin is still early, with more improvements needing to be made before it would be stable/usable for mass adoption. However, it has already proven itself to be extremely secure and unhackable over the last decade, even as the market cap has grown to $60 billion plus. Even if no further improvements to the protocol or ecosystem around bitcoin were made it still has massive value as a decentralized and secure value transfer system. Maybe bitcoin will end up just being a better version of gold rather than a better version of VISA.

As to people being worried about non-bitcoin cryptos becoming worthless, note that 9 different cryptos have market caps above a billion dollars with at least 30 other cryptos having 9 digit market caps. Lots of people are betting on bitcoin alternatives as having current and future value.

1. My tongue in cheek answer was meant to convey that I think we should aim to move forward, making something better than the best that exists. Not just aim to do a little better than mediocre technology which some parts of the world keep using.

2. Yes, but you still preferably want a system that works well for a large range of payments, not making people use many different payment methods.

3. Or not adopt cryptocurrencies and use something else. The incentive is not just to approve, but also to find alternatives.

I stand corrected. It was hacked 7 years ago and a glitch was found 4 years ago. Both got corrected, and it’s had a good security record since. Will it get hacked in the future? We’ll see…

Or not adopt cryptocurrencies and use something else. The incentive is not just to approve, but also to find alternatives.

I’m all for using whatever works best for consumers. If government fiat performs best for transactions, store of value, privacy, etc. then great, and with open competition we’ll get a chance to find out. I have my doubts that fiat will work best as a currency in the long run.

Some jurisdictions require sellers of a house where a murder has happened or where paranormal activity is suspected (a “stigmatized property”) to disclose this to any prospective buyers. I believe that requirement stems from the perception that the new owner should know all the facts that might make the house harder to sell or less valuable. I do wonder, though, if such houses are really harder to sell per se, or whether that only comes about because EVERYBODY fears that they will be harder to sell and therefore are not willing to pay as much for them. Do you know if there have been any studies disentangling the depreciation due to the original facts from the depreciation due to the perceptions regarding the possibility of a future re-sale?

I recently viewed a house (planning to buy it, though I eventually decided not to for unrelated reasons)- the owner was selling partly because his wife had died. He was careful to assure me that she had not actually died in the house, even though it was old enough that somebody almost certainly had died in it at some point.

There have also been cases of houses where particularly infamous murders took place being demolished, or in one case, the house remained standing but the street was renamed in order to change the address.

Here’s another article about a house where a murder took place (and the victim was buried under the floor for 20 years!) selling at auction for above the guide price. Note, though, that it was at auction in the first place because it wouldn’t sell privately- and that the buyer, who said he “wasn’t put off” by the history, didn’t intend to live in it.

Actually, this may be a way of disentangling the two sources of depreciation- investment buyers not intending to live in a property will only care about the financial side of things.

(Note that England does not require buyers to be told if a murder has taken place in the house.)

Slightly related, I have a friend moving back to Japan who wouldn’t move into an apartment building adjacent to a cemetery – even though it was heavily discounted! The discount was about 100 or so dollars a month lower rent, corresponding to about 8 per cent cheaper.

I think it’s fairly reasonable for buyers of certain properties to get warned if a high-profile murder took place there, since there’s a good chance you’ll get murder tourism, true crime aficionados or ghouls of some kind turning up regularly to gawk at the Famous Death House. Being warned beforehand that “Yeah, there’s a good chance you’ll have weirdoes turning up to hang around and try and peer in the windows on February 31st every year because that’s when the slaughter happened” at least gives the buyer a chance to consider if it’s worth having to chase off people trying to chisel chunks of brick out of the walls for souvenirs every year.

Maybe a high-profile murder. But your ordinary sort of murder doesn’t attract tourists; I know of two murders in the townhouse community where I once lived, and nobody ever showed up to gawk. (Actually one was ruled manslaughter, and if I ever get caught killing someone, that guy’s lawyer is the one I want.)

Does anyone know what’s going on with Berkeley’s Free Speech Week? Why did the students not pay for their reserved rooms on time? Why have so many speakers on the schedule said they were never contacted?

As a utilitarian I believe morality is merely a set of rules agreed upon for the purpose of benefiting each other. Hence it makes no sense to have any form of morality that leads to worse outcomes for everyone. Any form of moralizing that does not serve its proper purpose, namely benefiting humans, is useless at best and harmful at worst.

Take the ethics of ISIS as an example. ISIS is objectively not an amoral organization. Instead it is clearly a very moral entity with a really awful sense of morality that it enforces. However its morality is worse than absolute amorality when it comes to its outcome. Similarly we can discuss the Khmer Rouge which was another really moral organization that caused lots of deaths.

We can also apply the same idea to SJ leftists. They certainly have some form of morality or at least they claim to do so. In fact devoted SJWs are very moral in the same sense ISIS and Khmer Rouge members are, namely they do believe that their way is the right way and they are making the world a better place through their morality. However what is the result of their ideas getting implemented? SJWs have been crying “racism!” for many years. However has the real problem, namely underachievement of black Americans, been solved through SJ? Or at least are we closer to the solution than before due to SJ? This is doubtful.

The same idea can be applied to the Christian and Jewish moralists. If we apply the principle of whether something is beneficial or harmful then are their agendas actually beneficial? Do they care more about following religious dogmas or do they care more about actually helping humans in a secular sense (i.e. “I pray that you don’t go to hell” doesn’t count)? I’m not sure.

The world does not lack morality. Instead it sometimes has too much of it applied in the wrong situations but not enough of it when it is actually beneficial. Distorted forms of morality frequently cause cultural ossification, censorship of free speech and other forms of harm. I believe that unless we need to explicitly apply or discuss morality in a rational discussion we should leave it outside the discussion.

Excessive moralism is excessive, yes, and tautological cat is tautological. Absent any way of telling what forms of moralism are excessive before they start producing mountains of skulls, this sort of statement tells us literally nothing.

I personally believe that morality has a purpose, which is to help humans, instead of being an end in itself. Any moral doctrine that turns out to harm humans, no matter how wonderful it sounds, is evil and should not be accepted.

There are several kinds of moral doctrines that are particularly doubtful:

First of all, moral doctrines that are not grounded in helping humans and avoiding harm to them aren’t necessarily going to benefit humans and hence might be doubtful. If a moral doctrine exists for a purpose other than helping existing humans it is usually harmful. In particular, any moral doctrine that allows the ideas of a dead person to control living humans, for the sake of honoring someone who is no longer alive and hence has no reason to be respected, is absurd. Similarly, any moral doctrine that places any weird form of honor above human lives is absurd.

Secondly, any moral doctrine that supports the supposed interests of a non-sentient collective entity over the interests of the individual humans that constitute the entity is doubtful. For example, any idea that the honor of an ethnic group is so important that its members should suffer and even die just to preserve the honor of an entity that cannot feel or suffer is doubtful.

Lastly, any moral doctrine that supports some lesser value of some humans over the more important interests of other humans is doubtful. There is nothing more important than a human life, and in particular, causing a dictator to be mad for 5 minutes should not result in a death penalty. Similarly, the very idea that one can be fired for racist speech is evil, because causing someone to lose a job causes a lot more harm than racist speech does.

The quality of your investment in your house rests on two things – your ability to enjoy living in that house for itself, and your ability to sell the house at no loss when you decide to live somewhere else.

How much you enjoy the house will depend on things that you know about yourself and I don’t – do you hate stairs? Sunken living rooms? Fireplaces? Gas stoves? Large yards? Little grocery shops around the corner? You pick all these. You can and should safely ignore paint schemes and bathroom mirrors – these are dead simple to change. So are floors these days. However, cracked foundations, bad windows, and substandard plumbing should be completely avoided.

Regarding resale value – like they say, location, location, location.

1) Decide if you are going to stay there for 5+ years. If you’re not going to stay there for that long, don’t buy.

2) Find a map of the school districts for your area. Don’t buy houses in bad school districts. Don’t buy above half the median cost in a good school district.

3) If an area appears to be similar to one that would have been ‘redlined’ in the 1960’s, seriously reconsider buying there. If you’re going to stick it out and gentrify the area, that’s one thing. If your life changes (career, marriage, family) in the next ten years, you’ll be screwed. People with lots of money don’t buy in low rent neighborhoods, and at best you’ll be stuck doing the long distance landlord thing, which is a quick way to burn out the last bits of your faith in humanity.

Best place to get a mortgage…really doesn’t matter any more. Whoever it is, they’ll sell your note pdq.

Try reddit’s landlord page. They have good thunks on real estate, even though it’s mostly about buying to rent out.

With regards to the school district, how does it work when you are looking at buying in a district that has 100+ schools, ranging from great to awful? How do you find out if the house you are looking at would put you in one of the awful schools vs a great one? The quality of school means nothing to me personally, since I don’t have kids, and even if I decide to, they would go to the private school my wife works at. I understand the argument for resale value, but at the same time it would be hard to buy based off of a feature I never see any benefit from, especially when it could mean the difference between buying a nice house or a small condo.

My main reason for buying is that I don’t have any plans on leaving the area I currently live, and I could be paying a mortgage for less than my current monthly rent.

This. Buy in a bad school district, but near a good one. My house cost half what it would a mile down the road in the good school district, and costs half as much in taxes. (We are home-schoolers–it’s easier to home-school in a bad school district.) Yes, the resale value will be less, but I don’t think appreciation will be less.

*shrugs* As I said, if you for reals don’t care about resale value, school districts don’t matter. (The boundaries of school districts and school ratings are available on the internets.) But you should at least be aware that the ‘bargain’ house should be compared in value to houses in similar districts, and not to the structural equal in a good district.

Regarding plans – dude, it’s 2017. Leave yourself an out. You have literally no clue what’s gonna happen next year. If something catastrophic happens to your wife next year, you’ll be down her income *and* possibly access to the private school for the kids. I’m not saying don’t choose bad school district and private school anyway, but make that choice deliberately, with a plan B sketched out.

Regarding your ability to pay your mortgage with less than the monthly rent – remember that you’ll also have to pay taxes, buy insurance, and fix stuff in the house. You’ll have to fix A LOT of stuff in the house. Budget that stuff in when you’re making your plans.

I definitely care about resale value, but at the same time I also want a place I will enjoy living in, and it is probably more likely than less that I would live in this house for 10+ years.

When I say “mortgage”, I’m really saying “fixed costs”, so I’m taking stuff like insurance into account. Tax breaks as well. I do know that my upkeep costs would go up, but I figure that if I buy a house with no major issues (say foundation or roof problems), anything else I can reasonably expect myself to be able to fix without too much problem. I’ve only ever called my landlords a handful of times because I’ve always found it faster and easier to just fix issues myself.

There are obviously unknowns that could throw everything off, but that can be said for anything. We could afford the type of house we are looking at on just my salary, and if we really pinched and saved, probably just my wife’s salary too, so I feel as well insulated against loss of job type issues as I can expect to be. Even if we did have kids, and my wife no longer worked at a school, it would still be five plus years before I had to think about where I wanted to send my kid to school. So buying in a good school district would be purely based off of resale value.

Term you’ll be using a lot: PITI. Principal, Interest, Taxes, Insurance — the major fixed costs and the ones typically paid to the servicer of an escrow mortgage. Which brings up a few bits of important trivia.

One: typically when you get a mortgage, the mortgage servicer will want to be the one who pays the taxes and sometimes other fees (insurance, homeowners association dues). They set up an escrow account, which you pay into monthly (and at closing) and they pay those expenses out of. You don’t pay into it separately, it’s just that part of your mortgage payment is directed there. The servicer will periodically do an analysis to make sure your escrow payments are sufficient to cover expenses, and if they’re not, will increase them and sometimes insist that you make an additional payment to cover any shortage. There are limits to how much they can demand as a lump sum, however. In the unlikely case that expenses go down, they will reduce the payment and/or refund some money.

Normally this is no big deal, but some people don’t like it (it does tie up a small amount of cash) and will insist on a non escrow mortgage, which can make the mortgage harder to get and possibly more expensive (higher rate or points).

Point two is homeowners insurance: The mortgage company insists on it (whether your mortgage is escrow or not); it’s protecting them as much as you. You can choose who provides it. If you don’t get it, they will provide it for you and charge you for it, at an inflated price. So be sure you have your insurance lined up before closing, and keep it current.
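The principal-and-interest piece of PITI comes from the standard amortization formula; taxes and insurance are just added in monthly twelfths. A minimal sketch (the loan terms, tax, and insurance figures below are made-up examples, not quotes):

```python
# Sketch of a monthly PITI estimate using the standard fixed-rate
# amortization formula. All dollar figures are hypothetical.

def monthly_piti(principal, annual_rate, years, annual_taxes, annual_insurance):
    """Monthly principal + interest, plus monthly shares of taxes
    and insurance (the amounts an escrow account would cover)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    p_and_i = principal * r / (1 - (1 + r) ** -n)
    return p_and_i + annual_taxes / 12 + annual_insurance / 12

# Example: $250,000 loan at 4% for 30 years, $3,000/yr taxes,
# $1,200/yr insurance:
print(round(monthly_piti(250_000, 0.04, 30, 3000, 1200), 2))
```

Running numbers like these against your current rent (plus a maintenance reserve) is the honest version of the "mortgage cheaper than rent" comparison made earlier in the thread.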

but I figure that if I buy a house with no major issues (say foundation or roof problems), anything else I can reasonably expect myself to be able to fix without too much problem. I’ve only ever called my landlords a handful of times because I’ve always found it faster and easier to just fix issues myself.

Foundation and roof problems are not in the same category. Roofs in most areas are wear items; they’re going to need replacement every few decades. So a worn-out roof isn’t necessarily a reason not to buy a house; it merely translates into money (cost varying widely according to roof size and material, but high thousands to low tens of thousands in my area, with asphalt shingles). Major foundation problems, though, can be dealbreakers; they can be economically infeasible to repair.

Other upkeep costs you wouldn’t have in an apartment

* Landscaping. If you have a significant yard and you and your wife don’t enjoy yardwork, you’re going to be paying for this.

* Utilities: Even if you were paying for them in apartments, your house is likely bigger, and will cost more, possibly a lot more.

* Major repairs: Damage due to storms, roof leaks, drainage problems, heating and air conditioning, major appliances, hot water heater. Off the top of my head. Even if you’re handy, there’s some of this you won’t be able to do and some where the materials cost is high. (if you can do gas plumbing, refrigerant work, repair a roof, replace drainage tile, do sheet metal work, AND cut and patch concrete, just ignore this. Actually gas plumbing isn’t hard but I won’t do it inside because if my house blows up I want it to be the professional’s fault).

There should be maps or lists showing which address is assigned to which school. If your real estate agent is any good, they can find out, or you can dig on your own (by contacting the school district if the info isn’t on line).

If you don’t have kids, there’s nothing per se wrong with buying in an area assigned to poor schools, provided it’s reflected in the price. It even means less downside risk. Even if you change your mind, you have a few years to move if desired. But areas with poor schools often have other problems (such as crime, bad neighbors, or a lack of useful local businesses), so check those out.

I just got through buying my first house, and it was a process that made me anxious.

Early on, as one of the first things you do, spend at least one weekend by yourself going to open houses that are in the general area and price range to get a baseline idea of what you can expect when you narrow down your search.

Get a good realtor, one that stays involved in the process and explains things as you go.

Read over everything, and ask questions. Mistakes can happen. My realtor and I both independently spotted and questioned a mistake by my mortgage lender that the seller missed and signed off on, costing the seller a decent amount of money. This left the seller upset and uncooperative after the sale when we needed to ask for minor things.

You MUST pay for your own inspection for the foundation. Do NOT trust the foundation inspection report which the seller’s agent will give to you. Assume he hired the drunkest, blindest possible contractor to compile it, because it would help the sale. Foundation repairs can go into high five or low six digits. If you find anything too bad, you might have to walk away.

You should probably pay for a roof inspection and termite damage inspection as well. The roof is no big deal– you can probably replace it for $10,000 or $15,000 anyway. But you should know if you will have to do that. Note that all California houses will have termite damage. It’s only a question of how much you have and how many more years you’ve got.

Check the superfund sites near each property. Don’t buy a house near an airport. It is noisy, and some old prop-driven planes using leaded gas fly overhead.

1900-1950: check for lead paint, lead pipes
1945-1970: check for asbestos in acoustic ceilings, mastic glue, duct tape, and pipes

(These years are approximate– you could look up exactly when certain things were banned if you want more certainty. Or just test the damn thing if it looks fishy.)

You can regrow the skin, but not the specialized nerve endings that are erogenous (the Meissner’s corpuscles referenced above).

You would want to do skin expansion (it’s not regrowth because the lost tissue is lost permanently) so that the glans, an internal part of the penis, is protected from friction and drying out by clothing.

As for smegma, women produce 10x as much smegma. The difference is that their bodies haven’t been cut up, so they’re still naturally lubricated. Why did you think vaginal fluid had a whitish color? It’s not because it’s a white fluid. That’s smegma.

Just out of curiosity, does the female propensity for smegma justify, in your mind, preemptively slicing off the roast beef, as it were, at birth?

I don’t think so, but hey, I’m just one voice on the internet. What do you think about applying your argument to baby girls? Genuinely curious, and thanks for starting the conversation.

My wife was a urology nurse when we adopted a boy from India who wasn’t circumcised. We had him circumcised (at about 2 1/2), because my wife had seen plenty of issues with those not circumcised. Well, she also figured that he would feel embarrassed being different from almost everyone else. This was about 20 years ago; I don’t know if the prevalence of circumcision has changed much since then.

Yes, I assume it was mostly smegma she was talking about, because she did mention those who said it was no big deal if one kept clean. But it is a benefit of circumcision. It is true I didn’t question her decision at the time — I figured her expertise was way above mine.

Stambovsky v. Ackley! One of the few cases in Property classes that really get the whole room listening! It’s easy to feel like the court shouldn’t have granted rescission based on something so silly, but I sympathize with the buyer. When you’re putting down $600,000 on a house, you want to know about possible reputational damages, and poltergeists aren’t easy for the inspection team to find.

My all-time favorite supernatural case remains United States ex rel Mayo v. Satan and His Staff. The fact that all of the various inevitable lawsuits against God, angels, demons, devils, ghosts and other supernal entities are regularly dismissed for failure to provide proper service upon them is one of the most beautiful parts of the entire legal system.

The Court may not be able to confirm or deny the power of God, but it can certainly insist that proper service be effected.

I taught for many years at a Jesuit university, and one of my law school colleagues was a (very nice) Jesuit. Whenever the weather was bad I would go into his office, complain that the South Bay was in violation of warranty, point out that he was the agent of the responsible party, and threaten to sue if the problem was not corrected.

The weather always got better.

The underlying legal argument was that past performance created an implied warranty.

Foreign ghosts who committed offenses in Iceland would not have been beyond the court’s jurisdiction, any more than living visitors who did. But foreign ghosts might have been less willing to go along with the implications of the court process.

There was also a US case a few years back that ruled that the plaintiff, despite his protestations, was deceased. That case is initially amusing but somewhat less funny given the serious ramifications it can have.

The very idea that one should take the interests and wishes of dead humans into account when making a decision is absurd because dead humans can no longer feel anything.

However this principle has almost never been applied anywhere in human history. Why is this true? There are no ancestors turning in their graves, period. There is no reason why we living humans should do anything to please them.

No, I’m not talking about inheritance. Instead I’m talking about the absurd idea that living humans should somehow care about the wishes and ideas of their dead ancestors.

The less people care about the wishes of dead humans the better off the people are. Modern Western society is one of the societies with the least of such bullshit and it certainly helps. However such bullshit still exists. Otherwise this “turn in one’s grave” phrase would not have existed.

I think you’re misunderstanding this. People who say that such and such a thing would cause “the founding fathers (for instance) to turn over in their graves” are making a kind of argument from authority.

The idea that Thomas Jefferson, as the author of our national civic creed, should have his opinions given some special consideration in political matters does not strike me as entirely stupid. Of course, when he wasn’t founding our nation he was busy f*cking his slaves, so maybe he was not a perfect moral exemplar.

Also, of course the dead have rights. When they were living they were parties to the social contract, and since we have certain preferences that extend beyond our deaths that we would wish to be respected, we ought to, within reason, feel bound to respect theirs.

In the case of founding fathers of the US I believe we do need to care. However it has nothing to do with these people themselves. Instead we should care in the sense that they were people who had a vision at least some of us share.

Why do the dead have rights? I don’t have any preference that extends beyond my death that really has to be respected at all. Nor should anyone else.

I don’t have any preference that extends beyond my death that really has to be respected at all. Nor should anyone else.

You seem to believe you know what people ought to care about.

One of the things I do is to create and spread ideas. One of the things I care about is that other people read and understand the writings that contain those ideas.

Does that seem odd to you, a preference I ought not to have? If not, is there any reason why I shouldn’t care whether people read and understand my writings after my death? I can’t get pleasure from it after my death, but I can get pleasure now from the expectation that it will happen after my death.

Since you are not, as far as I know, Lord God Almighty, Creator of all seen and unseen, you do not get to set my preferences for me.

See, this is the type of statement that makes it very hard for me to reconcile your other stated views on how every single person should have absolute independence from any kind of ties or obligations or submission to others, apart from the kind of basic minimum obligations set down by the state of being an employee or living under a law-ruled society. You seem to set yourself up as Absolute God-Emperor making all the rules for the future society and dude, if you really mean it that people are not obligated to pay a straw’s worth of attention to anyone else’s whims or wishes, why do you think you get to set the rules and terms other people will follow?

You make it sound like your enthusiasm for transhumanism is really an enthusiasm for the notion that you will get to program into the transhumans your set of preferences and values such that they will all behave and think as you wish people would behave and think, and that does not sound like liberty at all.

The hard-nosed functional way is to observe that while the opinions of your dead relatives, or at least those that you’ve personally known, might not have any consequential bearing on your decisions, they do have some psychological bearing — even if your grandpa is dead, your mental model of your grandpa isn’t. It’s that mental model that you’re trying to fit when you make decisions, and you can’t just make it vanish in a poof of logic, decision-making doesn’t work that way.

Relatedly, you might observe that dear departed Grandpa had more life experience than you and therefore, all else equal, might have been expected to make better decisions on average. If your model of him is saying “don’t do this”, then maybe it’s not a good idea, unless you can work out the details of why he’d say that well enough to firmly dismiss them. (Say, if he had a snake phobia and you’re considering adopting a python.)

There’s also the sketchy neotraditionalist way, which I don’t fully endorse but which might be worth considering, and which goes like this: we aren’t unmoved movers, but are vectors for social technologies we pick up from various sources. Some of them were passed down from our ancestors. And to paraphrase Neal Stephenson, the fact that your ancestors survived to reproduce is strong evidence in itself that those social technologies were stupendously badass, because they existed in a nightmarishly unforgiving state of Darwinian competition and all the ones that weren’t stupendously badass died. If you’re making decisions about whether to adopt social technologies, it makes sense to discount new ones, or ones from other civilizations that might not be as well adapted to your circumstances, accordingly.

The idea that the interests and wishes of future humans should be taken into account is equally if not more absurd. The dead can at least lay claim to their own existence, and gave us the world we now inhabit. What have hypothetical future people done to warrant consideration?

Our actions can actually affect hypothetical future people, not the dead. Let’s say you had X amount of money. You can either take that money and spend it on a child you don’t yet have (maybe a crib or something of that nature) or spend it on something you have no interest in but that your dead parent would have liked. Would you really suggest that the former option is inferior to the latter option?

In the present, yes. But assuming you had a child, they would have access to a crib so the effect eventually has a positive outcome. Short of resurrection, there is absolutely nothing you can do that would help the dead.

“there is absolutely nothing you can do that would help the dead” is only true if you are referring to the internal state of the dead, rather than to the degree to which the universe matches their desires. Hedonic utilitarianism rather than preference utilitarianism. The latter seems almost tautologically more sane than the former, to me. I care that my descendants will have a nice world to live in even after I’m gone, for example, and any changes to that effect have positive utility to me over and above what utility they have to those who directly experience them.

Even if we leave religion completely out of it, dead people do have utility curves, or rather they had utility curves. My utility is served by the knowledge that my will will be carried out after I’m gone, as was my grandfather’s before me, and his grandfather’s before him. Hypothetical future people, on the other hand, do not have utility curves until they stop being hypothetical.

Any argument against honoring the will of the dead is a fully general argument against honoring any obligation that can not be enforced through cold steel and steaming blood.

@roystgnr I do believe that hedonist utilitarianism is better than preference utilitarianism. People can have preferences that are detrimental to themselves. Hence preference utilitarianism isn’t really the best thing in the world.

People valuing other people too much can lead to people controlling each other. And…having dead people de facto controlling living people is even worse than other forms of people controlling each other.

So you told sandoratthezoo that you don’t object to inheritance. That raises a question. You, a rich old man, set up a will establishing that your fortune be used to establish a foundation to further some purpose. Doesn’t matter what. I get hired by your estate lawyer to carry out that will. Do I have any moral obligation to do what your will says to do?

I don’t necessarily agree with the idea of inheritance. However before we can abolish it we need to think carefully. Views of dead people have no value unless we can resurrect them. Dead people themselves are irrelevant unless something related to dead people also affects living people.

I believe in your hypothetical example the will should be respected not out of obligation to the dead person but out of legal obligation.

You promised to do so, generically and specifically and for approximately the highest standard of “promise” our society recognizes. Breaking sacred promises is almost maximally unvirtuous. It is in this context explicitly against the rules. And if you’re going to be all consequentialist about it, you have to include the consequences of diminishing public trust in the people and institutions we expect to look after our personal affairs when for any number of reasons we can’t immediately do so ourselves.

What moral system even leaves this subject to question? Yes, you have an obligation to do what you said you were going to do.

Also, what do you hope to gain by doing otherwise? The alternative to inheritance law is not that the wealth of Dead Rich People is made available for socially benevolent causes like reducing inequality. The alternative to inheritance law is that the wealth of Old Rich People gets turned into hookers and blow before they die, or into buried treasure with their favorite son maybe getting the map after Dad kicks off, or into complex business ventures that will collapse without the network of personal contacts that only the Father and Son share, or into corporate entities that will endure forever and are vaguely expected to favor the founder’s preferred interests for a while at least. Or is simply transferred in life, less the three-sigma hookers-and-blow fund. The only way the wealth is made available to your preferred causes is if the Dead Rich Person favored the same or similar causes and would have written his will that way in any event, or he is careless, unmotivated, or incompetent in planning for the future of his money (hint: he’s rich, so probably no).

Because failing to recognize and appreciate the social milieu/norms they inhabit appears to be a common failure mode of rationalism. Utilitarianism, being a fully general solution that doesn’t generalize, is just one of the more visible examples.

Hookers and blow are generally available in tax-free form on the black market, which puts limits on how much you can hope to extract by taxing a legalized version. Less illicit rich-people consumption goods are often of a form that can be easily parked or registered offshore in some more amicable jurisdiction.

Granted, the people who imagine schemes like this are a good idea tend to also be people who believe in the world communist or at least socialist revolution, so presumably there is to be a single World Government that makes sure rich people can’t escape like that. But that never seems to actually happen.

The only reason there are large black markets for cocaine and the services of prostitutes is that those things are illegal.

In my opinion prostitution should be taxed and regulated like anything else. Cocaine’s combination of neurotoxicity and addictiveness makes me a little less enthusiastic about legalization, but at the very least we should start by legalizing the recreational use of safer alternatives, and see if they displace black market cocaine consumption.

At any rate, I doubt that most consumers (especially well-to-do ones) would be willing to accept the downsides of the black market just to avoid tax.

Granted, the people who imagine schemes like this are a good idea tend to also be people who believe in the world communist or at least socialist revolution

The single most obnoxious trait of libertarians is the tendency to call anyone who disagrees with them either a Communist, or a Fascist. The fact is that the only way you can make libertarianism sound remotely attractive to most people is to ignore the existence of the political and economic systems of every western democracy, and imply that the only choice is between your ideas, and mass murder. This is probably a sign that your ideas might be less than sound.

It’s also obvious that you don’t know what a progressive consumption tax is.

There is more than one way of implementing it, but the simplest proposal is the personal expenditure tax. Under this plan each household’s total taxable consumption of goods and services would be calculated by subtracting their savings and investment, plus some large standard deduction, from their total reported income (including any money they borrowed). Consumption would then be taxed at a steeply progressive rate.
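The arithmetic of that personal expenditure tax can be sketched in a few lines. The deduction and bracket figures below are invented purely for illustration, not taken from any actual proposal:

```python
# Hypothetical personal expenditure tax sketch.
# Taxable consumption = income + borrowing - savings/investment - deduction,
# then taxed at progressive marginal rates. All figures are made up.

STANDARD_DEDUCTION = 30_000  # assumed "large standard deduction"

# Hypothetical progressive brackets: (lower bound of bracket, marginal rate)
BRACKETS = [(0, 0.10), (50_000, 0.25), (200_000, 0.50)]

def taxable_consumption(income, borrowed, saved_or_invested):
    """Compute the household's taxable consumption base."""
    consumption = income + borrowed - saved_or_invested
    return max(0, consumption - STANDARD_DEDUCTION)

def expenditure_tax(consumption):
    """Apply each marginal rate to the slice of consumption in its bracket."""
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if consumption > lower:
            tax += (min(consumption, upper) - lower) * rate
    return tax

# A household earning 150k, borrowing 10k, and saving 40k has a
# 90k taxable base (150k + 10k - 40k - 30k deduction).
base = taxable_consumption(150_000, 10_000, 40_000)  # 90_000
```

Note that savings and borrowing enter with opposite signs: money put into investment escapes the base entirely until it is eventually spent, which is the sense in which the plan taxes consumption rather than income.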

There are other forms that are more like a variation on VAT. Here is Senator Ben Cardin’s page advocating for one such version, and Alan D. Viard at AEI advocating another.

You just called me a Communist for wanting to eliminate that Republican bete noire, the “double tax” on capital gains.

Less illicit rich-people consumption goods are often of a form that can be easily parked or registered offshore

You’re going to have to explain that one. Consumption taxes are uniquely resistant to those kinds of avoidance schemes, as it is of course quite hard to offshore consumption. I suppose you could buy some very expensive real estate overseas (disguising it as an investment), and vacation there, but it seems like there would be pretty hard limits on how much income people could, or would, dispose of in that way.

I think everyone is missing that currently living people have a vested interest in a will being carried out. And that often enough these still living people/entities took direct action while the decedent was still alive to ensure they’d be included in the will. (Even children take such actions, or neglect to take them – I’m estranged from my parents so there’s a good chance I’ll be disinherited from a likely mid-to-low six figure sum.)

This is likely the major reason why wills are honored (except when they aren’t).

Do I have any moral obligation to do what your will says to do?

You have a legal obligation not to screw over the named beneficiary. In this case it means every charitable cause, which is a very large group that doesn’t want to be screwed over. It especially means every charitable cause that has some degree of relation to the executor, as these are the charities most likely to get some of the money.

I leave money in my will to go to a particular good cause. It’s true that if the money is somehow diverted elsewhere after my death, that will not affect how happy I am. But the knowledge that there are institutions which will result in the money going where I want it after my death makes me happier now which helps justify the existence of those institutions on a utilitarian basis.

Think of it as a special case of the rule utilitarian vs act utilitarian issue. The rule “be bound by the wishes of dead people” makes live people better off, even if the act of following those wishes does not.

Why is it a big deal though? If you really care about someone you should get their genetic information. Once technologies make it possible to recreate such a person again you can in some sense resurrect them.

If we can somehow document all the quarks and gluons in a human body to some extent, can we reconstruct a human?

So far as we know, the laws of physics prevent you from exactly copying the state of a quantum mechanical system.

So if you mean to ask: “could we use some kind of scanner to examine someone on the sub-atomic level, and store an exact back up copy?”, then the answer is no.

Now do you need an exact copy, in the quantum mechanical sense? Since we don’t have anything like a scientific theory of consciousness, I don’t know. But it seems to me there are good intuitive reasons to suspect that consciousness and the seat of personal identity (the “soul”, for lack of a better word) are deeply rooted, and perhaps impossible to copy.

Imagine a machine that could exactly duplicate a human being on the molecular (but not subatomic) level. To use this machine a man would sit down in a chair, and be scanned. The classical, but not quantum, data thus gathered would be transmitted to a molecular constructor attached to a chair across the room. The constructor then builds a second man, molecule by molecule, to exact specifications. When the process is complete the second man wakes up in his chair with the exact same thoughts and memories as the first man.

If we assume that the seat of personal identity is identical with the classical information that makes up the brain, and thus the mind; we must ask a question: are these men the same person?

On the one hand it seems that their minds are identical; on the other hand it seems obvious that two different people with two different subjective experiences cannot be the same person. If we prick the finger of the second man, the first does not feel the pain.

Now consider this: what if someone installed a mechanism on the first chair that killed the user the instant the process was complete? If you were to use the device, thus modified, would you survive?

The person who got up from the second chair would certainly feel himself to be the same person who sat down in the first one. The instant he sputtered to life, it would seem that he would pick up every thought, and feeling, right where his now dead predecessor had left off.

So what has happened? Has one man been transported across the room? Or has a man been killed, and then replaced, Invasion of the Body Snatchers style? If we go with the first interpretation, does that mean that one man existed, for an instant, in two places at once? If we go with the second, must we give up the materialist view that mind, and thus the seat of personal identity, is the same as the brain?

I think the resolution to this paradox rests in the fact that the minds of the two men must differ, albeit in a very small way. After using the version of the device that allows both men to live, they will open their eyes, look across the room, and have two different thoughts. One will feel that he has stayed in exactly the same place, and one will feel that he was teleported across the room, leaving us with two very similar, but subtly different people. For the first man, sitting in the chair with the scanner was no different than sitting in any other chair, but for the second man there will seem to be a radical interruption of the normal flow of consciousness.

It seems to me that physical continuity is necessary for continuity of identity. There is no mind uploading, and no human technology could ever resurrect the dead.

Now that leaves open the question of whether some cosmic phenomenon could cause a mind to be resurrected (or perhaps more precisely, “reoccur” through natural processes) in all of its details. On this question of cosmic immortality I’m left only with the thoughts of Arnold Toynbee.

Human nature presents human minds with a puzzle which they have not yet solved and may never succeed in solving, for all that we can tell. The dichotomy of a human being into ‘soul’ and ‘body’ is not a datum of experience. No one has ever been, or ever met, a living human soul without a body… Someone who accepts—as I myself do, taking it on trust—the present-day scientific account of the Universe may find it impossible to believe that a living creature, once dead, can come to life again; but, if he did entertain this belief, he would be thinking more ‘scientifically’ if he thought in the Christian terms of a psychosomatic resurrection than if he thought in the shamanistic terms of a disembodied spirit.

Even if consciousness does result from uncopyable quantum stuff in the brain, it doesn’t follow that a human being cannot be effectively duplicated. It just means the copy will not have continuous consciousness; perhaps the copy will have an experience akin to an electric shock, but consciousness will re-establish itself based on the copyable parts.

To quote G.K. Chesterton (because it’s not really a good Open Thread without a Chesterton quote):

Tradition means giving votes to the most obscure of all classes, our ancestors. It is the democracy of the dead. Tradition refuses to submit to the small and arrogant oligarchy of those who merely happen to be walking about. All democrats object to men being disqualified by the accident of birth; tradition objects to their being disqualified by the accident of death. Democracy tells us not to neglect a good man’s opinion, even if he is our groom; tradition asks us not to neglect a good man’s opinion, even if he is our father. I, at any rate, cannot separate the two ideas of democracy and tradition; it seems evident to me that they are the same idea. We will have the dead at our councils. The ancient Greeks voted by stones; these shall vote by tombstones. It is all quite regular and official, for most tombstones, like most ballot papers, are marked with a cross.

I don’t think the dead should be allowed to have any influence on society other than in terms of what useful things they have done that we can use. However nobody should be obliged to obey the dead.

@hlynkacg My axioms do not reject the idea that sometimes humans voluntarily cooperate in the workplace and elsewhere. If you sign a contract and work then you have voluntarily accepted the responsibilities of the job. There is nothing wrong with that.

The only kind of obligations I oppose are involuntary ones, especially the idea of imposing parenting on humans.

Obligations, regardless of whether they are voluntary or conditional, require the individual to submit to something other than their own desires. That is deeply antithetical to your professed beliefs.

As John points out above, there’s a reason pretty much every moral system in the world settled on promise keeping as a central virtue. A contract or responsibility that does not bind its parties is less than worthless.

@hlynkacg No, it’s not. I believe in the validity of obligations one voluntarily enters into. Once a person agrees to some deal they should keep it. I believe contracts and responsibilities should be binding.

Otherwise why do I oppose the idea of adultery?

My absolute individualism is mostly about rejection of families and other groups one does not voluntarily get into. That’s it.

HFARationalist, I think we need more specifics to discuss the issue fully – what are some examples of rights you think we shouldn’t give dead people?

Without specifics, I’d note that some respect for the dead has a function in reassuring the living. If Aunt Mildred really wanted me to keep the family portrait on the wall of her house after leaving it to me, then one function of leaving it up is to reassure people (during their lives) that their posthumous wishes will be granted some respect.

I believe dead people and interests of dead people should be completely irrelevant unless they are somehow related to benefiting living people.

I won’t want anyone to keep my picture or do anything about me if I’m dead. If you like my ideas you can use them in whatever way you consider appropriate. You can bury me wherever you want because it won’t matter to me anyway.

We know what you believe. We don’t know how that cashes out on the object level, nor how good your reasons for believing it are.

This is borderline unkind, but you’ve said in the past that you want constructive criticism, so here: you have a bad habit of taking questions about your motivations and interpreting them as questions about your beliefs. Beliefs don’t come out of nowhere. They are motivated by something, and if you’re trying to convince people of unusual beliefs (you do, constantly), it behooves you to start with the reasons why and only move on to your conclusions once that’s firmly established. That can be done inferentially (specific, concrete things you want to see happen in the world) or analytically (start from shared axioms and work your way down), but dumping a big abstract bottom line on people is the worst of both worlds.

I assume that by motivation you mean more concrete examples or moral axioms that produce my particular moral and social beliefs. Sure! I will do it now.

I believe one key reason why West Europe edged ahead of East Europe, the Middle East, South Asia and Northeast Asia is that West Europe managed to be less authoritarian and more rational than these places. All other major civilizations, and Medieval West European civilization too, suffered from lots of cultural ossification. Cultural ossification can be in the form of political authoritarianism-induced impairment of rational thought, extreme social conformism caused by ideologies such as Islam and Confucianism, extreme asceticism, violent fundamentalism (e.g. the Crusades) or other similar ideas that harm a society. To protect an authoritarian regime the authoritarian leaders frequently produce or encourage nonsense that helps them maintain power. It is the populace that is actually harmed by such nonsense in addition to the harm caused by authoritarianism itself.

An important part of the unthinking caused by authoritarianism is ancestor worship and other forms of family authoritarianism. The very idea of chaining someone to their parents from birth is completely revolting. However this is a crazy idea that is harming people in many ossified cultures such as Muslim ones. It is completely revolting that ancestors should be allowed to control people simply for being ancestors, regardless of how immoral or ignorant they are. It is much worse if even the feelings of dead people are allowed to interfere with the lives of living people just because these dead people happened to be evolutionarily successful. Such tendencies can basically stop societies from ever progressing if they are sufficiently strong.
Any ideology that requires people to obey their ancestors is completely revolting; just a little bit less revolting than ideologies that kill or injure people. If a person has to unconditionally obey someone else due to something they did not voluntarily get into then this person is not really fully an autonomous person. Instead they are in some sense enslaved. The idea that this obedience nonsense has to exist for 18 years in America is disturbing enough. However the idea that this should never end in places such as Pakistan is revolting. If everyone in a society has to obey their parents, including dead ones, then the society does not really have persons at all. Instead everyone is just a slave chained from birth who can never be released from their slavery… or maybe… drones. This idiotic phenomenon never exists among animals. Why did humans invent such a weird idea?

HFARationalist – thanks. Based on your examples, my response is what I thought it would be.

Your response to me offered two examples where we respect the wishes of the dead – (1) keeping a picture of a deceased relative on display (I presume that the decedent wished to keep the picture up and the resident would otherwise prefer not to) and (2) interring the remains of a decedent in the way she wished instead of the way most convenient for the living.

My first Chesterton’s fence analysis of both of those is that they are for the living. Many living people presumably gain comfort from knowing their wishes will be respected after death. If we say “Aunt Jackie’s dead, who cares that she wanted to be buried on her special gravesite when we can sell it for more and have her cremated,” then Uncle John is not going to get a lot of comfort from the belief that his wishes will be respected.

A second argument is that keeping our promises helps to develop pathways that make us more reliable and better citizens. If you are the kind of person who will break a promise casually if you think the counterparty will never know or be affected, your promises are on the whole worth less.

None of this is to say we always respect the wishes of the dead (there’s a whole body of law about when we may ignore them) or always keep promises (ditto), but there’s often some real value in keeping Aunt Mildred’s picture up or her gravesite clean, just because she asked us to.

Many living people presumably gain comfort from knowing their wishes will be respected after death.

I would think the living gain more comfort just by keeping their emotional connection to the relationship alive by fulfilling the wishes of the beloved dead.

People who were genuinely antipathetic to the dead generally have no problem in not fulfilling their wishes, or even contravening them in spite. They get a final sense of satisfaction that way (except those who have a moral qualm against vengeance like this, and thus get an emotional boost by ‘doing the right thing’).

When my father died, I was happy making some fairly significant bequests to some charities at his request, and we were happy to arrange a service and treatment of his remains that he requested.

HFARationalist is right that as far as we know, my father doesn’t gain any utility from us actually keeping those commitments that he wouldn’t gain from believing (mistakenly) that his family was going to do so. But it would still feel wrong, FWIW.

The very idea that one should take the interests and wishes of dead humans into account when making a decision is absurd because dead humans can no longer feel anything.

However this principle has almost never been applied anywhere in human history. Why is this true?

I’d suggest you start by examining your notion of “what is of value”: it is based on too reductionist a stance, and therefore it breaks down as a model of human behavior.

An example of what I consider a very similar mistake in concept of morality: cheating on your spouse is totally a-OK if you are good enough at lying and your spouse never finds out that you cheated on them [and if they do find out, well, these people often think that the person who told them is the one responsible for breaking up the illicit relationship, not them cheating in the first place…].
Strangely enough, I don’t think it’s common that everyone who knows about the situation will actively partake in maintaining the lie.

The reason is that these kinds of things, caring about someone’s feelings and all that, are not ultimately about the currently experienced feelings of the injured party: they are about caring about that person as an independent fully-fledged individual, with feelings and emotions and agency and reasons and rationalizations and secrets and things that make them happy and things they are scared about and all that, as much as you. Not just a machine that you predict will do something if you give them one input and something else with some other input, and you give them certain inputs just because some of the outputs amuse you more than others. And that’s why you should care about what they’d feel if they knew the whole picture (or at least knew it as well as you do). In the case of the cheating spouse, “not knowing the whole picture” is about being lied to. In the case of dead people, it’s about that they are not around to know or feel anything.

Another example. Consider your best friends or relatives. Someone, anybody close and dear to you, who you believe cares about you. Now, imagine that after you died, they’d be totally nonchalant about it, maybe even express happiness and joy, and sell off every personal item and memento you happened to give them in your will? And they’d explain that there’s nothing wrong with it, because their acts can no longer affect your feelings. I’d feel hurt, because it would imply they never really cared about me in the first place. Of course I’m not around anymore to feel hurt, but that isn’t the important thing in the basic emotional model of human relationships conveyed by the words “care about someone”.

These are also exactly the emotions the plot of A Christmas Carol by Charles Dickens is about. (What would you think of your life if you knew what other people would do when they can express their opinions freely because you are not around anymore?) However, it also goes both ways: what does it say about you and your relationship with the person now dead if you start acting like you never cared about their opinions or feelings as soon as they are gone?

This is the emotional reason my mother stores my great-grandmother’s very special “fine” china (which, truth be told, according to my research had only modest value 100 years ago, wasn’t even then of particularly good quality, and as an antique today isn’t worth much of anything either) and takes it out of the cupboard only for some special family events.

In case you for some reason still can’t “get it”, I can also offer a reason that might affect unfeeling automatons:

If you act like dead people’s opinions and feelings matter, this maintains a particular societal norm. In other words, trusting in the existence of such a norm after you die, you can make wills and testaments. It’s up to you how important you think this is.

Cheating on one’s spouse in a monogamous marriage violates the marriage contract so it is indeed actual harm.

Caring about dead people’s feelings, on the other hand, rests on a social convention that does not have to exist, namely that people can somehow have influence even after they are dead and that people are entitled to such influence. This very principle is wrong unless you believe that the dead aren’t really deceased.

Once I die I don’t want anyone to cry about the fact or honor me in any way. OK. HFA is dead. So what? Am I Stalin or Hitler that people must obey my feelings? If my ideas are great use them. If they are awful feel free to criticize them.

namely that people can somehow have influence even after they are dead and that people are entitled to such influence. This very principle is wrong

Great, so you’ve just done away with inheritance and property rights. Unless Grandfather Jones handed over a bundle of cash to Young Tom before Grand-dad kicked the bucket, when he leaves it in his will to Young Tom, Uncle Smithers is perfectly entitled to go “applesauce” and pocket the cash himself.

Same with “I got this farm when my dad died, and he got it from his father, who got it from his father when he settled here and improved the land”. Who cares about anything as far back as great-grand-father and his line of succession? If I, the family lawyer, can say “I’m taking over this land because look, the deeds are in my name” then you can go whistle. Theft, what theft? Just because your dad wrote a will leaving you the farm and expected me to put the deeds in your name, not my own? He’s dead, what do I care about his wishes?

Caring about dead people’s feelings, on the other hand, rests on a social convention that does not have to exist, namely that people can somehow have influence even after they are dead and that people are entitled to such influence. This very principle is wrong unless you believe that the dead aren’t really deceased.

You continue to fail to understand my point. I don’t posit it as an abstract principle that needs to be justified and reduced to some other principles; I claim it’s a fundamental human emotion and part of what I’d call possessing a full functional theory of mind. On a very personal gut level, it makes me sad to think that my parents would know that I would do something $bad, because I value their opinion of me, and not just as an abstract concept of that opinion being “useful” or having utility (as some others have tried to explain by quoting Chesterton), but as deeply as the deep emotions go. What they would feel matters to me because they matter to me, and that’s more than enough justification for me.

Once I die I don’t want anyone to cry about the fact or honor me in any way. OK. HFA is dead. So what? Am I Stalin or Hitler that people must obey my feelings? If my ideas are great use them. If they are awful feel free to criticize them.

Re-reading this, I also want to stress this: I tried at length to explain that it’s not about wanting people who are not dead to feel obligated to do something because you need it after you die; rather, if your current relationship with them is a genuine one, it would entail that they would continue to view you and your opinions as worthy of respect even after you cease to be, because that’s what it means to ascribe value to someone as a person, and to their opinions and views and everything that being a person entails, instead of just pleasing them. For the exact same reason they should not lie to you just to make you feel good about yourself while you are alive (the existence of official marriage contracts is only tangentially related to this).

On a very personal gut level, it makes me sad to think that my parents would know that I would do something $bad, because I value their opinion of me, and not just as an abstract concept of that opinion being “useful” or having utility (as some others have tried to explain by quoting Chesterton), but as deeply as the deep emotions go.

@nimim. Do you understand why this is so? I think HFA does make more rational sense here, but I understand that you say this is an emotional thing, not rationally worked out. But emotions don’t come from nowhere. Could this be a guilt complex that your parents buried deep in you, so the guilt remains when you do something your parents would disapprove of even though they are gone? Or maybe it somehow relieves you of missing them if you continue to follow their strictures?

Personally I don’t have these emotions at all. Both of my parents are dead, and it would never occur to me to follow any path because my parents thought it wise. I wasn’t all that close to my parents, but I wasn’t estranged either.

Maybe the main issue here is that I’m arelational. I’m not in any relationship (in any sense, not just romantic) with anyone else. It’s not really that I hate humans that much. I do get along with humans for non-social purposes such as obtaining information, work and having a nice rational discussion. However…that’s it. I don’t really relate to people at all even though I do help people.

You continue to fail to understand my point. I don’t posit it as an abstract principle that needs to be justified and reduced to some other principles; I claim it’s a fundamental human emotion and part of what I’d call possessing a full functional theory of mind. On a very personal gut level, it makes me sad to think that my parents would know that I would do something $bad, because I value their opinion of me, and not just as an abstract concept of that opinion being “useful” or having utility (as some others have tried to explain by quoting Chesterton), but as deeply as the deep emotions go. What they would feel matters to me because they matter to me, and that’s more than enough justification for me.

This is something I completely don’t get. Different people have different values. This would never occur to me. Here is why: if the ideologies and emotions of my parents or anyone else had control over me, then I would not actually be a fully autonomous human being. That means I would need to be liberated from this heinous internalized oppression.

Re-reading this, I also want to stress this: I tried at length to explain that it’s not about wanting people who are not dead to feel obligated to do something because you need it after you die; rather, if your current relationship with them is a genuine one, it would entail that they would continue to view you and your opinions as worthy of respect even after you cease to be, because that’s what it means to ascribe value to someone as a person, and to their opinions and views and everything that being a person entails, instead of just pleasing them. For the exact same reason they should not lie to you just to make you feel good about yourself while you are alive (the existence of official marriage contracts is only tangentially related to this).

If others’ weird opinions and views influence yours not for rational reasons but instead due to your relationship then your thoughts are censored. I don’t want to be unkind but this is an important reason why I will never have a relationship with anyone. I might pretend to have one but I really won’t have one. Relationships often impair reasoning.

P.S. This seems to be yet another autism-related mentality divide. I believe that what you wrote is probably genuinely legit and common for non-autists but is really weird to at least some autists.

Do you understand why this is so? I think HFA does make more rational sense here, but I understand that you say this is an emotional thing, not rationally worked out. But emotions don’t come from nowhere. Could this be a guilt complex that your parents buried deep in you, so the guilt remains when you do something your parents would disapprove of even though they are gone? Or maybe it somehow relieves you of missing them if you continue to follow their strictures?

I think the main problem is that I’m trying to describe a fuzzy entangled mess of ideas with a very imprecise tool called language, and I’m grasping at straws to explain why dead people would continue to matter (and maybe after the first box of explanations was empty, I started to draw inspiration from irrelevant personal psychological complexes).

But on the other hand, isn’t possessing moral intuition fundamentally related to guilt? When you do something and then realize you should not have done it, and feel terrible about it? But rethinking this, this particular pattern of thought might turn out to be quite unrelated to the concept I was trying to express, anyhow.

Personally I don’t have these emotions at all. Both of my parents are dead, and it would never occur to me to follow any path because my parents thought it wise.

I might have overstated my case: I’m not talking about reflecting every possible course of action against my mental models of them (thinking “what would they think” and then heeding their advice). Rather, that when it comes to some things, the empathy you feel towards the people close to you remains even after they die.

There’s been lots of talk about inheritance in these threads. What if I try to sketch a couple of related examples?

I’ve already mentioned keepsakes, but sometimes, there’s more than remembrance to them.
I trust everybody has heard the literary trope “this is my father’s sword; before it was his, it was his father’s sword, and … [various parts of the sword proceed to get replaced and repaired during the course of family history, not unlike Ship of Theseus]”.

Now, how would the current holder of the sword feel if he lost the family heirloom? Not mere sadness or anger because it was the only thing his father left him, or ordinary things like that. There would be more to it, because that sword had a special kind of importance to it, an importance which draws from the fact that the sword was important to his father and grandfather (and all the ancestors before them). They are not around anymore, but the current holder does not forget them.

Another similar example that isn’t that relevant in the modern world, but maybe was a couple of generations back. In an agricultural society, it was important to keep your house well: repair the physical structure of the house itself as needed, and manage everything that comes with it so that the house prospers. Before you became the head of the house, it was your parents’ house, and they worked hard to maintain the farm. And now that it’s yours, you strive to keep a respectable name on the house, because its personal history carries some meaning beyond “it’s useful to be a successful farmer today so that you can eat tomorrow”. And what if the house came to ruin because of you? On a more positive note, what if you managed to buy the swamp next to your property and convert it into prime farmland, just like your grandfather wanted to but never had the chance?

These examples are not exactly what I’m trying to describe (I don’t have any swords or land property), but it involves a quite similar sentiment.

Also, if you want to search for sources, I think I derived a great deal of this from various pieces of literature (at least Thomas Mann’s Buddenbrooks comes to mind).

In ancient times a related tangle of emotions often manifested as a belief in ancestral spirits (who will come to haunt you if you disrespect them; so maybe that’s where HFA’s point about “acting like the dead are still around” comes from). But I believe such superstitions exist for a reason.

There is actual harm. I believe adultery is evil because it is an evolutionary offense. Adultery might lead to illegitimate children. If you have heterosexual sex with someone other than your spouse, your spouse might end up evolutionarily less successful than they should be.

Furthermore adultery can cause genetic harm to a society by making seducers much more evolutionarily successful than they otherwise would have been.

Now, when you are alive, do you want people to honor you after you are dead? Do you find it odd that other people have such desires?

In what sense? In the sense of being a STEM dude and having written some nice papers? Yes. In the sense that people must censor their speech or behaviors because of me? Hell no. The latter is a part of what family authoritarianism is about. I would actually be disturbed if people try to codify my speech and thoughts like the Bible and force others to obey them. The worst form of authoritarianism is thought control. If you can’t talk about something or express certain views because of Uncle Joseph or Aunt Amy there is something wrong here. That’s similar to political correctness.

I do see your point. However desiring that others will cry after you are dead is unhealthy. Why must others practice asceticism because of you when you are no longer alive? If they do feel sorrow it is their choice. If they don’t so what? I don’t want even one person to be sad because of me when I pass away.

We humans and our emotions are a product of an evolutionary process, but evolution is not the source from which we should draw inspiration and justification for what we should do or feel. Our emotions don’t need further justification.

Why would it be an offense in the evolutionary sense, anyway? Plenty of species (and some human cultures) have done tremendously well without the concept of faithful marriage.

And anyway, you could replace unfaithfulness with any lie; I just hoped to evoke some of the emotions associated with one of the gravest lies.

We were just talking about Terry Pratchett’s books, which got me to thinking about book series, and how the quality/appeal can vary over the course of the series or the author’s career. I can think of several series where my favorite book has been midway through the series, with both later and earlier works being somewhat lesser, even if still enjoyable. (In particular, there’s the thing where the best novel is the first one, or where everyone’s favorite is the end.)

Craig Johnson’s Longmire series – my favorite is #7, Hell is Empty, with #5 The Dark Horse a close second.

Kage Baker’s The Graveyard Game is, imo, easily the best of her Company novels. (Plus, as it’s about a bunch of time traveling cyborgs in thrall to mysterious overlords, reading it out of order is less an issue than one would think.)

Of Lois McMaster Bujold’s Miles Naismith novels, Memory stands far and away above the rest, but with the distinct disadvantage that (imo) you really do have to read eight books before in order to get the effect. OTOH, Civil Campaign was just a riot and *fun*.

Any one have other long, meaty series to suggest, with midpoint highlights?

The series doesn’t really catch its stride until #3, “HMS Surprise”. I’m not sure which one is the very best, but it might be #9, “Treason’s Harbour” or #10, “The Far Side of the World”. And there’s ten more novels to go after that.

Memory is my favourite, but the runner-up is probably Ethan of Athos. I wasn’t such a fan of Civil Campaign – some parts of it were great, but others felt annoyingly preachy. (Also, what’s wrong with the guy raising lots of daughters?! He was treating them okay and you have room, dammit!)

Bujold could get preachy, tis true. She did a better job of managing it than most.

I agree that there was an argument to be made that the daughter scheme wasn’t exploitative in and of itself, but it was pretty much in line with making a horse your heir – the sort of thing that really needs to be cut off before the notion spreads to other people. Plus, you know, Barrayar – there was only so much galactic madness that they could choke down in any one decade.

(The guy’s wife offering to warm him with her plasma rifle was funny, though.)

It costars Elli Quinn, who was a major supporting character in the early Miles stories.

Speaking of whom, Bujold is apparently retiring the Vorkosigan saga with a series of light happily-ever-after stories for the supporting cast. I wouldn’t mind seeing Quinn one more time in that context.

For those unfamiliar, they’re kind of a YA Flashman-lite, about a young woman who joins various militaries and pirate fleets and has all kinds of crazy adventures in Napoleonic era England, the US, and pretty much everywhere else.

They’re very well written and engaging, but I’d say they hit their high point about halfway through the series. For the first several books, the author gets better and better, and the characters get richer, but the last few books are a little repetitive – the characters don’t have much room to change without resolving or changing the story, so the books tend to go over the same ground as earlier novels.

I think for most series, the first couple of books are where the writer is warming up and still getting their feet under them and working out the kinks and bugs. The middle books are where they hit their stride, they know and the readers know how it will go, and they’ve built up familiarity without it becoming tedious due to over-repetition of the same tropes. End books of series fall into the trap of “I hate these characters but my publisher insists I churn out another one of these because they make money” or the writing is on autopilot or the author has run out of things to say or they’ve become Too Big To Edit and get away with the excesses that their editor, during the middle period, reined in.

Often it happens that a writer writes a book without intending it to be a series, then it takes off and suddenly they have a hit on their hands – like the Ellery Queen books, where Dannay and Lee wrote the first one for a contest, won, and decided to turn their character into a series ’tec like Philo Vance.

The Nero Wolfe stories have a well-worn formula but (for me at least) managed all the same never to pall, right up to the very last book; the mystery became of secondary interest, and you read for the familiarity of the setting and the characters and how you knew they would interact in their relationships.

I think the Spenser books fell into that pattern, too; even after Robert B. Parker’s death they have been continued by a successor (the publishers plainly wanting to keep the goose producing the golden eggs, as with his other series characters, which are being continued as well). It’s noticeable how the Spenser books changed over the years from the style he used in the 70s and 80s, and it’s very noticeable how the new writer (Ace Atkins) is rolling back the character development that Parker invested – Parker more or less brought the characters to a natural end as to how they would have changed over the years – so that Spenser and the other main characters are more like their selves of the 80s/90s books (the middle period of the series and the one probably regarded as the best part).

It’s been a while, but this reminds me of Asimov’s Foundation series. Maybe more if you want to tie in the Robot novels and Empire novels. But then it gets tricky, given the timeline of publication and timeline of when he decided to place them in the same timeline.

Also, it’s been so long since I read them that I’m left with this decayed sense of story. I ended up liking the first, fourth, and fifth Foundation novels most, but both my tastes and my ability to evaluate stories changed so much over that time that I can’t trust even my own sense here.

—

And of course, the meatiest, most epic series I can remember reading at all would be the King James Bible. But my reading of it was even more spread out, and I read most of it as a teenager. Genesis and Exodus obviously had the best yarns, only to have things get a little dry for the rest of the Pentateuch, and even harder to follow still for the rest of the OT, save for a few fun episodes such as Daniel or Job.

The series picks up in a big way in Matthew – pretty much an Abrams-level reset button there – then a few retellings, followed by some more boring letters. They try to give it a good wrap-up at the end, but I’m still not sure what the writer was thinking on that one.

I keep wanting to start reading Bujold’s novels because I see a lot of praise for them but I keep getting turned off by the fear of preachiness. I don’t want to read a novel that strong-arms me (even if the strong-arming is done discreetly) down the author’s route of “and this is the moral of the story which all right-thinking people will agree with”.

Also, I want to strongly approve of a character like Miles who suffers from a debilitating illness but makes a successful life despite it all but tanj dammit, couldn’t he just have been a book-keeper or something instead of Greatest Military Genius and Future Emperor and whatever else? I get a strong whiff of Marty Stu which is probably massively unfair but I’ve had my fill of “character gets kicked about something awful by fate and destiny and big ol’ meanies but triumphs in the end because they’re just that darned wonderful”.

Bujold also mentions the “great man’s son/daughter” condition which drives Miles to try to be the greatest he can be, while never thinking he lives up to his father’s or grandfather’s shadow (she herself was the daughter of a relatively well-known weatherman).

I find her novels relatively easy to digest because it is obvious that, realistically, Miles would not have been as good as he was (various showstoppers are conveniently ignored). He also has bouts of depression and mania, so is not all that totally wonderful. And he can definitely see the skills that his cousin the Emperor of Barrayar has that he lacks, and the skills the higher-ups in the Empire of Cetaganda have that make him the equivalent of a mascot.

And at heart, most of the problems he solves are fairly basic or minor. He makes the world a better place, but he’s no superman. And while he wins battles, he never wins wars (unlike his father and grandfather).

And sometimes he’s only saved by dint of his close relationship to the Emperor – literally nepotism. And he recognizes this.

I get a strong whiff of Marty Stu which is probably massively unfair but I’ve had my fill of “character gets kicked about something awful by fate and destiny and big ol’ meanies but triumphs in the end because they’re just that darned wonderful”.

This is not an unfair characterization. If you really can’t tolerate it anymore then avoid it, though “The Mountains of Mourning” would still be a worthwhile novella.

Doesn’t really help much. “Compared to legendary heroes who are one-in-a-million types” sure, but “compared to ordinary guys, he’s really something extraordinary” is precisely what I’m complaining about. Bit like angsting over “sure I cured cancer, but compared to the guy who cured death, what did I do? nothing at all!” I mean, that is the definition of a Marty Stu/Mary Sue: oh sure I have all these talents and gifts, but really I’m nothing special if you compare me to [list of wonderful beings] sigh, moan, simper, saves day despite all that. Freeing prisoners and setting up the situation where they’re in a position to successfully overthrow a planetary regime and have a revolution that does not end with the leaders all paraded in a show trial and summarily executed is saving the day, God-Like Grand-da be damned!

I think I’d probably like the character better if he said “screw the whole Ruler Of The Empire family business, no way I can compete with Literal Supermen there, I’m gonna find my specific talent and be the best damn tulip breeder in the quadrant” or something 🙂

“screw the whole Ruler Of The Empire family business, no way I can compete with Literal Supermen there, I’m gonna find my specific talent and be the best damn tulip breeder in the quadrant” or something 🙂

That’s what his clone (legally younger brother) Mark does. And Miles kind of envies him this power of the second-born.

Damnit, I must have misremembered. Was it her grandfather, then? It’s been years since I read the author’s notes in her books, but I could have sworn she mentioned this in one of them, possibly the one where Miles is sent to the arctic.

I’d recommend you read them. I didn’t find them very preachy, and Miles is believable in a heroic way. Bujold’s explicit tactic of asking “what’s the worst thing he can survive” and then doing it makes for interesting reading.

Thirdly, military SF isn’t my thing 🙂

I’d argue that it isn’t really military SF (which is my thing). The books are good, but they’re not quite in the same category as Honor Harrington or David Drake’s stuff. Miles is a bit too loose of a cannon for that.

Bujold’s explicit tactic of asking “what’s the worst thing he can survive” and then doing it makes for interesting reading.

If you do it a couple of times, sure. If you do it all the time? And that’s the impression I get – that she keeps turning the screw another twist and he still comes through. As I said though, I may be unfair because I’ve never read any of the books, I can’t make myself do so due to the “And Miles is so great!” fan-approval when they’re talking about them.

At least around here, pretty much all of the praise I have seen (and given) is of Bujold as a writer, and of Miles et al as characters. I can see how the sort of people who want a Mary Sue could find one in Miles, and maybe you spend some of your time hanging around people like that, but it really is a stretch for the character as written.

D, read them! At least try one. I don’t normally push books on people, but I will make an exception for Bujold, because she writes characters so well. Miles is probably my favorite character that I have ever read, although I prefer the earlier ones in the series. The books aren’t particularly militaristic; they simply take place in a militaristic environment. I don’t remember any preachiness at all.

I think the first one I read was “The Warrior’s Apprentice.” Is that the one with the episode of Miles going to the pleasure planet and kidnapping the genetically modified female warrior, and the one that has him being a traveling judge on the home planet? I read it so many years ago, I forget the details a bit, but I fell in love with Bujold and her characters at that point.

I think the first one I read was “The Warrior’s Apprentice.” Is that the one with the episode of Miles going to the pleasure planet and kidnapping the genetically modified female warrior, and the one that has him being a traveling judge on the home planet.

No, that’s _Borders of Infinity_, a fixup novel.

I think the first one I read was _Cetaganda_ (in serial form in Analog), perhaps the worst introduction to the series there was at the time (and still not a good place to start). Then I read _Borders of Infinity_, after which I obsessively searched used book stores to find the then out-of-print earlier books.

but I will make an exception for Bujold, because she writes characters so well

Her own personality’s dispositions bleed through her (main) characters quite a bit. But this is fine, as few authors really describe other personalities well, and those who do are generally limited in their repertoire. She does well describing cousin Ivan, whose dispositions are nowhere near her own.

Drop that crown and nobody gets hurt. I’ve got a plasma arc and I’m not afraid to use it! I mean it, keep that bloody thing away from me!

Miles also has an extensive knowledge of Barrayaran law he will quote at anyone who tries to make him an Emperor.

Thirdly, military SF isn’t my thing

Fortunately, there’s about one or two novels’ worth of military SF in the entire series. Miles very definitely wants to be a Great Military Genius, because that’s what basically all his childhood role models were, but he even more than that wants to do right by his family and his nation, and there are limits to how far he can rationalize running off with mercenaries as being somehow his patriotic duty.

They’re mostly not preachy (Civil Campaign and one or two of its later sequels were exceptions). In the early books Cordelia, who comes from SF California, has a civilization that’s portrayed as just as flawed as anywhere else.

In the later books she gets a reputation as the woman who can talk anyone into anything by using her cultural superiority and better arguments, but that’s somewhat forgivable in that that’s how her son portrays it, and he isn’t really objective. But this is absent in the early books. Also, there’s a nice twist in that Cordelia’s actually somewhat religious, while the people of the technologically backwards Barrayar are atheist.

A few open threads ago we talked about Lent and other types of self-denial. If I remember right, most of the benefits people (including myself) named were focused on themselves–ways in which my self-denial makes me better off at the end.

I now realize an important benefit of self-denial is when it’s done as a devotion to someone else. Personal experience has shown me this can be extremely effective at bringing the self-denier closer to the person he is devoting his self-denial to.

Self-denying in devotion to God therefore makes sense in a way I hadn’t formerly considered.

How do people develop filters–as in, ways of seeing the world that result in some particular category of thing becoming more obvious than it otherwise would be. Is there any academic literature on how filters are formed, or steps a person has to go through to get a new filter?

In non-academic literature – Caryll Houselander’s writings (The Reed of God and Caryll Houselander: Essential Writings) describe her experiences in learning to see every person’s sufferings as the Passion of Christ.

Sooo…you see someone from your outgroup – say, Richard Spencer. And you see him get punched.

That’s Christ, being scourged with whips by the Roman guards. Sarah Palin and Hillary Clinton – the woman taken in adultery, about to be stoned to death. Trump – Simon “Even God Calls Me ‘Idiot’” Peter, stumbling along trying to do the best he can.

But mostly just – see Christ in everyone, and respond to the truth of that seeing, and not the outer shell of mortality.

(the phrase itself is a ref to John 6:60, and the Houselander teaching/visions references Matthew 25:31-46)

Is Christ really useful in this, though? When I see Richard Spencer or any other member of my outgroup being punched, I can love him just fine without loving him like a Christian loves Christ. I just need to recognize that he’s a human capable of suffering just like me and that the reason he’s in my outgroup is pure luck of birth – he didn’t choose his genes, his brain circuitry, his upbringing, his political beliefs, his decisions any more than I chose mine, and therefore he is no less virtuous for landing in my outgroup due to his beliefs and actions based on those beliefs than I am for landing in my ingroup due to my own beliefs and actions based on those beliefs. From that, it obviously follows for me that I want to reduce his suffering exactly as much as I want to reduce the suffering of people in my ingroup, and that I find an increase in his suffering to be exactly as undesirable as an increase in the suffering of people in my ingroup. Just having this empathy for all other humans (and, to an extent, all beings capable of suffering) seems enough, no Christ required.

As an aside, this reminds me of Jordan Peterson’s description of Jesus Christ as something like the culmination of the greatest good in all people (I haven’t seen or listened to any of his Christianity lectures, so my understanding of his view on this isn’t particularly good, but he’s touched on religious myths in his psych lectures). I don’t know how common it is for Christians to view Christ in this way, but learning that this was one way in which some Christians saw Christ was something really interesting and unexpected for me.

@lvln, I don’t think Jordan Peterson, interesting though he is, is an example of a Christian. Although there might be self-professed Christians who do see Christ as mostly metaphorical, that’s unorthodox, to say the least.

I agree with and want to second your major point: it seems like this is jamming Jesus in-between oneself and other people. Feeling sympathy for the suffering, even if you otherwise despise them, is normal.

That said, your list of accidents of birth is a bit troubling. If you can’t even claim responsibility for your own decisions, then you’re not responsible for anything at all. It’s an internally consistent view but also utterly insane.

That said, your list of accidents of birth is a bit troubling. If you can’t even claim responsibility for your own decisions, then you’re not responsible for anything at all. It’s an internally consistent view but also utterly insane.

I mean, sure, the lack of responsibility is a highly inconvenient conclusion, but I’m not sure it’s possible to avoid it. Our decisions are the results of the physics of our environment and our cells. People seem happy to acknowledge this when it’s the result of, say, hitting one’s head many times due to a career in football, or certain brain cells dying due to a blood clot, but there’s no principled distinction between the freedom of choice those people have and the freedom of choice of people who haven’t suffered such injuries. People without those injuries are just as much held hostage by their uninjured brains as people with CTE or stroke damage are held hostage by their injured ones.

To me, this just means responsibility doesn’t exist – or rather, that responsibility needs to be redefined so that it does exist. Which is to say, “responsibility” as we generally use it seems to have attached on it a whole lot of affect – both negative and positive – that shouldn’t be attached. Like the idea that someone who’s responsible for doing something bad deserves to be punished and to feel guilt, or the idea that someone who’s responsible for doing something good deserves to be rewarded and to feel pride. I think it’d be more useful to remove such associations from “responsibility” and only use the meaning in the sense of who actually did what. And then consider reward and punishment separately, while taking into account the costs and benefits that go into creating an incentive structure that we think is good for our society.

For instance, a murderer is “responsible” for committing murder, but it shouldn’t follow that we gleefully enact suffering on her because she’s responsible for doing something bad. Rather, whatever punishment we enact upon her should be the minimal suffering necessary such that a regime of such a punishment being carried out on murderers in our society results in a reduction of the suffering caused by murders (i.e. loss of life, grief of friends/family, higher stress of populace due to greater fear of being murdered, etc.) that is greater than the suffering of murderers under such a regime.

That seems incredibly hard, if not impossible, to calculate, and it’s clear to me that the heuristic of “responsible for bad thing -> they deserve to suffer” is a very, very useful shortcut in many, perhaps most, cases. To use a mirrored example, if someone feels proud of themselves every time they solve a novel engineering problem, they will be encouraged to solve more novel engineering problems, which will tend to make the world a better place – thus this is a good heuristic that we may even want to adopt in our society. But it also seems to me that such heuristics have pitfalls – if we encourage being proud of oneself for solving novel engineering problems, that seems likely to lead to some people jumping from that feeling of pride to a feeling of moral superiority over others who aren’t as good at solving novel engineering problems, even though it was purely by luck of birth that some are better at this than others. So it seems to me that it would serve us best to start on that hard work of figuring out how to run a prosperous society which minimizes unnecessary suffering in a reality where people’s decisions and behaviors are beholden to physics entirely outside their personal control.

people’s decisions and behaviors are beholden to physics entirely outside their personal control.

This sounds weirdly dualist to me, although I doubt you intend it to be. People are physical machines. Saying “They are beholden to physics entirely outside their personal control” makes it sound like physics is acting on them and they are somehow helpless, but they are physical hardware. It may be extremely complicated, path-dependent, self-modifying hardware, but people are still stuff. They cannot meaningfully exist without physics. And as physical machines they obviously act on their environment as well. The fact that there is no soul substance somehow outside physical reality shouldn’t have any meaningful effect on our conception of morality, because all that would do is push the problem down to another level of hardware.

Kipling fans out there– can anyone recommend a good mass-appeal short story to “sell” Kipling to a youngish late-teenage person of average reading ability, narrow sympathies and limited cultural literacy? Assume that the person guardedly liked “Rikki-Tikki-Tavi” but found “The Man Who Would Be King” unacceptably “long and confusing.”

I am particularly partial to those in Life’s Handicap, but I suspect the appeal is limited. Likewise “The Undertakers,” which I appreciate because it contains the oldest literary (and poetic) reference to ‘shallow reference pools’ that I know. (*)

If he likes sports, “The Maltese Cat” may be to his taste. Also try “With the Night Mail”.

Other favorites: “Her Majesty’s Servants”, “The Village that Voted the Earth Was Flat”, and “The Cat that Walked by Himself”.

I’m fond of the Puck of Pook’s Hill/Rewards and Fairies stories. I think all of them would be accessible to the person you are describing. Also the Jungle Book stories.

For poems, “The Ballad of East and West” is not only an entertaining story, it will equip him to look down on all the people who dismiss Kipling as someone who believed that “East is East and West is West and never the twain shall meet” is about cultures. “The Last Suttee” is technically impressive and a good story, although I have some reservations on the ending twist. “A Code of Morals” might amuse some people that age.

The bit you quote is, if I remember correctly, Yorkshire, not Cockney; it’s one of the others who speaks Cockney dialect. The “soldiers three” are from Yorkshire, Ireland, and London, deliberately three very different people from different places thrown together. Not that that helps much if the written dialect just clangs for you rather than suggesting something Kipling was used to hearing and you and I are not.

Jiro, I put Kipling’s “Complete Novels and Stories” on my kindle and had the same experience — as soon as a short story introduced dialect, I had to go to the next one. His semi-phonetical approach made reading too painful. I couldn’t hear “words” the way he wrote them.

In keeping with the now established tradition of discussing Sci-Fi shows, my personal favorite, Babylon 5!

To this day, B5 remains one of the most ambitious shows ever put on television. Officially, it was a five-year story all planned out in advance, a novel told on television. Unofficially, it wasn’t exactly that, but to a remarkable degree it managed to deliver on that promise. Chekhov’s guns set up in the first episodes are fired years later; secrets and mysteries are established, revealed, and have consequences. Other than Game of Thrones, I can’t think of any show that has set out to tell as ambitious a story, and Game of Thrones had source material to work from.

There are many flawed elements of Babylon 5. It is almost entirely written by a single person, J. Michael Straczynski, which means a very clear voice shines through, for good or ill. When it’s bad, it’s quite bad, but when it’s good, it knocks it out of the park.

Particular mention must be made of Peter Jurasik’s masterful portrayal of Londo Mollari. Jurasik has an uncanny ability to give JMS’s worst dialogue exactly the right touch to sell it. Londo himself is one of my favorite characters in all of fiction. He goes on an amazing and tragic journey that is foretold in the first episodes but manages to be fascinating all the way through. And it is a political journey, despite JMS not having a particularly deep understanding of politics.

The show was ahead of its time by at least a decade, and even well into the age of prestige television, almost no one is attempting to do what B5 did on TNT in the mid-90s. It’s a damned shame. Any other fans around?

I was a big Babylon 5 fan when it was running, but I didn’t have cable at the time, so I couldn’t watch season 5. I never have gone back and rewatched it. I should do that some day, though I wonder how well it would hold up for me.

The special effects do not hold up. At all. They weren’t great then (some of the, for lack of a better word, fight choreography was quite good, but there’s a limit to how good mid-90s computer effects could be), and they’ve actually gotten worse over time, as many of the original files were lost and the transfer to DVD had to use video-quality sources.

The main story arcs, though, they still hold up, and that’s the magic of the show.

B5 well deserves the praise it gets. It has its struggle points, but – for the record – it had me here, at the end of episode 5, season 1. I yelled at the screen a lot, periodically, afterwards, but B5 is dang good stuff.

(Battlestar Galactica is still my fav space opera, but I remain willing to be won back by either Farscape or Firefly.)

It basically brought space opera to television, back before it was cool to have long story arcs. Plus, the overarching plot was really good. Much more importantly, it was coherent; JMS clearly had an idea of where he wanted the story to go (unlike BSG; watching BSG was like when you’re all excited to eat a delicious pizza and then discover that there’s pineapple on top of it).

Season 1 is painful to rewatch, but that’s true of a lot of shows. The special effects are dated. But the story is really good, and it was a landmark series, and all scifi fans should watch it. And I agree that Londo is a superb character.

You have to think of Season 5 as a bonus season. They didn’t know they would get renewed until the last minute, so they had to finish the story in Season 4 (which ended up feeling a little rushed, but he still pretty much pulled it off).

Now you’ve all got me wanting some more shows in that universe. It’s a shame none of the spin-offs lasted.

But it is one of the minor testaments to that show’s greatness, that a handful of appearances as a supporting character in B5 has almost completely erased the connection between Walter Koenig and Pavel Chekov in my mind. As long as he doesn’t do the accent, at least.

“Not many fishes left in the sea, not many fishes – just Londo and me!”

“One of you will be emperor after the other is dead.”

“I’d like to live just long enough to be there when they cut off your head and stick it on a pike as a warning to the next ten generations that some favors come with too high a price. I want to look up into your lifeless eyes and wave like this. Can you and your associates arrange that for me, Mr. Morden?”

“Babylon 5 was our last, best hope for peace. It failed. But in the year of the Shadow War, it became something greater: our last, best hope for victory.”

Well, I co-moderated the usenet newsgroup, and I consulted with the visual effects company to help improve the technical accuracy, does that count?

The visual effects, FWIW, were immeasurably better than classic Trek or classic Dr. Who; the only problem is that they were good enough to make us expect that they should have been better still. And they were at the state of the art for CGI work in the early 1990s; unfortunately that was the tail end of the era when truly first-rate visual effects still required the sort of model work that PTEN couldn’t afford. Also unfortunate, the original masters were lost during one of the corporate shuffles, so it would be prohibitively expensive at present to redo it to modern standards.

The storytelling was as good as anything I’ve seen on television, at least for the first three seasons. Which are the only ones I own on DVD; it’s a superb three-act story and a flawed five-act one due to the hasty replotting of the fourth season.

rec.arts.sf.tv.babylon-5 was one of the highlights of that entire series, largely because JMS himself was a frequent poster there. Before that, he was plugging the series on r.a.s.t, and I learned of B5 precisely because of his posts. And I was ready for an SF TV show that didn’t make what I considered silly mistakes, such as aliens all speaking perfect English and being humans with Bumps of the Week. And transporters and holodecks and the rest of the plot-breaking inconsistent tech. JMS had grown up on this and a boatload of book SF; it was clear from his posts that he knew his shit, and so I knew this series would deserve a watch.

There were missteps – Zima, the “alien zoo”, and some trite dialogue in so many places – but meanwhile, we were all seeing behind the scenes stuff courtesy of JMS’s posts. Remember, this was back before the Web was a thing. No Reddit AMAs, no tweets, no constant stream of online articles; and then here was this TV series whose producer / head writer was interacting with his freaking fanbase while the show was on the air. He was even a Usenet veteran; he understood the newsreader software as well as any of the rest of us did, and would reply to questions, quote comments, respond to criticism, all of it (within reason).

I also learned he could keep a secret like nobody’s damn business, thanks to the whole situation with Sinclair / Sheridan and more importantly, O’Hare. I may tell that story later, if someone else doesn’t beat me to it.

I liked the first season a lot, even given that it was finding its feet. I know Straczynski had to do some re-writing and shuffling around when the character of Sinclair was replaced by Sheridan, but he stitched up the seams remarkably well and they hardly ever show.

I much preferred the character of Sinclair to Sheridan, by the way; I really disliked the All-American Hero aspect and how many wives can one guy have on the go, anyway? Introducing his replacement who just so happens to be his former missus? While his other former missus is away with the Shadows and turns up inconveniently Not Dead when they’ve all been sure she was an ex-parrot thus making her a not-so-former missus and maybe he’s now a bigamist? And his new missus is the Big Cheese in intergalactic diplomatic circles? The later seasons were a bit over-extended and the sub-plot with the Walking Hairproducts Advertisements rogue Telepaths could have been a lot shorter – or even dropped – with no problems.

But it was a really good series and despite a few crappy episodes (forgivable, every show has them sooner or later) I have fond memories of it, and the ending was good.

Spoilers, because I’m not going to bother rot-13-ing a 20 year old show.

Sinclair was definitely a more interesting character than Sheridan, but I don’t see how the story works out as well with him leading the Army of Light. He has to go back and become Valen before the resolution of the great war and the Minbari civil war; otherwise he’d know the Vorlons aren’t the good guys, so someone else would need to step in.

JMS has never laid out exactly how the story would have unfolded if Michael O’Hare had been able to stay with the show, but I believe he has hinted that Sheridan was merely brought forward in the planned chronology rather than invented out of whole cloth to fill the gap. At this point, I don’t expect to ever know more than that.

Sheridan irritated me in part – and I don’t know how much of this was intended, because JMS was smart enough to do this, and how much of it was just his character grating on me (I cheered Garibaldi when he was resistant to the Cult of Personality forming around Sheridan with that line about “He’s not the pope, he doesn’t look anything like her”) – because (a) he was put in as Earthgov’s top Earthforce officer and preferred station commander and (b) he then mutinies (you can’t really call it anything else), declares the station independent, and is in rebellion against the government and the President but then (c) demands utter loyalty and obedience from the lower ranks, including the Nightwatch – well tough mate, you just blew the chain of command to hell and gone, you can’t pull the “I’m your superior and ranking officer” card on them since you have just told your Earthforce superiors and the civilian government to take a hike! And you’re willing to fire on Earthforce ships, including those with your former comrades-in-arms, when they come to re-take the station from a mutineer! So the whole “this is disobeying the lawful orders of a superior officer and there will be consequences” shit he pulls on dissenting junior officers didn’t impress me; if he gets to decide he doesn’t want to serve under Clark because he disagrees with policy, they get to decide they don’t want to be part of a mutiny and their loyalty is to the government they swore oaths to, not to him personally.

I found Sinclair a much more interesting character; not the fault of Bruce Boxleitner who did a better job than I expected (I knew him mostly from light romantic hero parts in TV shows such as the lead in Bring ‘Em Back Alive and Scarecrow and Mrs King) but it seemed pretty clear that the studio or whomever wanted a younger, more action-orientated, romantic lead and kick-ass type character and so we got Sheridan replacing Sinclair. Maybe eventually it would have happened, as Sinclair became Valen, but I suppose Sheridan was a bit too successful at playing the anti-alien warhawk that President Clark hoped he’d be when approving him as commander of the station, as far as I’m concerned 🙂

I liked O’Hare’s acting as Sinclair, and I suppose that kind of sensitivity and vulnerability as a character was reflective of his real-world circumstances.

I believe JMS himself coined the term “method casting”, by analogy to method acting. But hiring the mentally ill actor who remained functional by stoic self-discipline, to play the mentally ill character who remained functional by stoic self-discipline, was purely coincidental.

Hiring, in 1993, the Croatian actress with the Serbian husband to play a mixed-race political leader of a planet that was scheduled to undergo an ethnoreligious civil war in the third season, that may have been deliberate.

I don’t know how much of this was intended, because JMS was smart enough to do this, and how much of it was just his character grating on me (I cheered Garibaldi when he was resistant to the Cult of Personality forming around Sheridan with that line about “He’s not the pope, he doesn’t look anything like her”)

I’m with you and Garibaldi on this, and I do think that by the end JMS was too in love with his own character to give him the skepticism he deserved. But whether intentional or not, the last year of Sheridan was a master class in storytelling about how Evil Dictators really come about, and I’m not sure I want to revisit that universe five years later.

it seemed pretty clear that the studio or whomever wanted a younger, more action-orientated, romantic lead and kick-ass type character and so we got Sheridan replacing Sinclair.

That’s the part Paul Brinkley referred to cross-thread, about JMS being able to keep a secret. Whatever the original plan was, and whatever the network might have wanted, Sheridan replaced Sinclair when he did because Michael O’Hare had a (mental) health issue that made it impossible for him to continue in the role, and which JMS promised not to talk about until O’Hare was safely dead. Which took about twenty years, during which time JMS let the world think he had knuckled under to a bunch of studio suits and was being too proudly stubborn to admit it.

To be fair, JMS did push back at such accusations. I distinctly recall language such as “you weren’t in the room when we made this decision; it was me, O’Hare, and [other producer? I forget now]”. I took from this that there was stuff going on that I didn’t know about, JMS was not going to say what, and all we fans would know for certain was that O’Hare was getting phased out.

And this certainly looked to a few fans like JMS was knuckling under. I remember many comment threads along those lines. I was quietly worried that there was a loss of confidence in his ability to helm the show. Maybe O’Hare was getting out while the getting was good? I couldn’t know either way. All I knew was that JMS had a very consistent story, with a very consistent, clearly defined hole. He didn’t sound like someone searching for a politically viable statement, and he didn’t sound like suits were speaking through him – one of the benefits of him being a direct Usenet group poster.

I ended up watching anyway – easy to do – and I got to see a pretty cool time travel story sending Sinclair off, smooth as you please, as if they’d planned it before Season 1. And then I heard about O’Hare’s passing years later, and then I got a story within that story. It was almost better than the show plot.

Today, I see tropes like “Severus Snape isn’t what he seems”, and I keep thinking that the JMS / O’Hare affair beats ’em all.

But whether intentional or not, the last year of Sheridan was a master class in storytelling about how Evil Dictators really come about, and I’m not sure I want to revisit that universe five years later.

Oh indeed. As I said, Boxleitner surprised me pleasantly because I’d only seen him in rather shallow roles before that, and he proved he was up to the challenge of playing a meatier character.

But for whatever reason, as the seasons progressed, Sheridan became the Big Hero so whatever he did (and he did some shady things) it was all okay because he did it for the Greater Good (DS9 with their exploration of how Sisko and the other characters and indeed the Federation as a whole were changed and coarsened by a long, grinding war handled it better, but they came later).

For me, Sheridan (more than Clark) was the exemplar of how someone could be swept to power on a rush of popularity and with a heavy dose of manipulation by well-meaning allies behind the scenes (Delenn being his fiancée-later-wife, as well as an ally in La Résistance, representing the Minbari who are the Big Guns when it comes to war, with her not-so-subtly threatening to have the Warrior Caste come rain hell on you if you don’t fall in with John Darling’s plans). He’s literally canonised by later generations and more or less turned into a Dear Leader in the mould of the House of Kim with him modestly(!) submitting to the authority of the remnant Earthgov once Clark kills himself but taking the job as President of the newly minted Interstellar Alliance, thus at one step becoming the real power in the galaxy.

I think we are meant to see Sheridan as a flawless hero, with his real flaws being ignored or whitewashed as the show wraps up, and that’s a pity because it was a missed opportunity.

I’ve heard authors describe their protagonists as taking on a life of their own inside the writer’s head, explaining how their story has to go because of what sort of person they are.

Meanwhile, here in reality we have history books full of heroes, revolutionaries, and liberators who manipulated their way into being presidents-for-life, dictators, and tyrants, all the while keeping their formerly liberated subjects cheering for the Great Man and never recognizing what he had become. John Sheridan may be the only person ever to pull this trick, not on a population of adoring subjects, but on his own author.

But whether JMS was in on it or not, that’s what Sheridan was by the end. A literal president-for-life, with an election that took place unnoticed between episodes, with no opposition that he recognizes as anything but traitorous scum, and the most powerful military force in the galaxy loyal to him personally and willing to deal with that traitorous scum without anything resembling due process. And the story almost never dropped the facade of him being the pure white-hatted Good Guy.

Garibaldi got in a few good digs, IIRC. I wonder whether that was Michael Garibaldi whispering in JMS’s ear, or Jerry Doyle? Both good people.

I loved it; watched it religiously when it was first on. Londo was also my favorite character. I also thought it held up really well the last time I re-watched it; knowing where it was going meant I noticed more of the little bits of clever stage setting in the early episodes.

B5 is probably my favorite as well. And it’s still one of the few shows that took storytelling seriously. Yes, serialized, continuity-heavy storytelling is all over the TV now, but even still it’s rare that anyone plans anything in advance. BSG is an obvious example — yes they stuck to a storyline in a way Trek has never done for any length of time, but it’s also incredibly obvious they were making it up as they went along. Even if B5 didn’t end up exactly the way JMS had planned at the beginning, the story still hangs together better than just about anything else. (Game of Thrones does, or did, ok, because someone else did it for them, up to a point.)

As far as the production values, they definitely went for CGI a bit before it was really ready for a weekly TV series. Sets were uneven. There were a few very good actors and then a quick, large drop-off. But it’s the story that makes B5’s reputation.

Continuing the discussion on the Political Compass globalization question.

DavidFriedman’s last post on this topic, which quotes me replying to Mark V Anderson:

why not just answer “moderately or strongly disagree” if you see that prioritizing profit-seeking as superior to prioritizing humanity when it comes to actually helping humanity in the long term?

Because if you believe that, you believe that prioritizing profit seeking is the way to prioritize humanity, hence the question makes no sense.

The original question wasn’t about profit seeking, it was about corporate profits, which is a far more limited subset of profit seeking.

So you’re saying, for people who believe this, that all transnational agreements (and laws pertaining to borders) which deal with movement of humans, information, goods, money, shipping, fishing, pollution, etc… across borders should prioritize corporate profits regardless of what else is negotiated away or left off the table in favor of prioritizing corporate profits, because only prioritizing corporate profits can maximize the benefit to humanity as a whole.

I might be straw-manning this position, I really am unsure, so feel free to set me straight. But if not, I’m sorry, but this seems like a dangerous monomania.

@Matt M
You wrote:

I think the point is that some people (myself included) would tell you that the best way a corporation can “serve the interests of humanity” is by maximizing profits. That the two issues are not contradictory and are, in fact, one and the same.

You misread the proposition (and I may be partly to blame for removing the focus of the conversation from globalization). This isn’t about how corporations can benefit humanity, but whether Economic Globalization should prioritize humanity or corporate profits.

So you’re saying, for people who believe this, that all transnational agreements (and laws pertaining to borders) which deal with movement of humans, information, goods, money, shipping, fishing, pollution, etc… across borders should prioritize corporate profits regardless of what else is negotiated away or left off the table in favor of prioritizing corporate profits, because only prioritizing corporate profits can maximize the benefit to humanity as a whole.

If you interpret it that broadly, nobody at all is in favor of prioritizing corporate profits. The question asked:

“If economic globalisation is inevitable, it should primarily serve humanity rather than the interests of trans-national corporations.”

It doesn’t say “should all laws governments make primarily serve humanity rather than … .”

When are the non-corporate interests of humanity served over those of trans-national corporations in international negotiations pertaining to the non-governmental movements of things associated with the inputs, outputs, and mechanisms of trans-national corporations (e.g. humans, information, goods, money, shipping, fishing, pollution, etc…)? And are these non-corporate interests few enough, or subordinate enough, that the trans-national corporation interests should generally be preferred?

This isn’t about how corporations can benefit humanity, but whether Economic Globalization should prioritize humanity or corporate profits.

But I feel like this falls to the same criticism I leveled before… the OR implies that humanity and corporate profits are opposite and contradictory ends. They are not. As an anarcho-capitalist, I struggle to think of an action a government might take that would dramatically lower corporate profits while simultaneously benefiting humanity.

The government instantly deciding to nationalize or confiscate all the wealth of corporations and CEOs and distribute it to the poor would not, in my opinion, be a benefit to humanity on net.

What’s government got to do with it? Imagine a situation where a corporation has a choice between pursuing one strategy (say, running a marketing campaign that will make gullible people buy more widgets than they really need) with expected profit X, and pursuing a second strategy (say, developing a new widget to sell that will help orphans) with expected profit Y < X. The first strategy prioritises profits, the second prioritises humanity.

If marketing can’t convince people to buy things they don’t need, then any time marketing leads to people buying more things (something I assume has happened, given that marketing exists) those people must have been buying suboptimally low amounts of things. Why should that be the case?

I’m not saying that products are being sold to orphans at lower-than-optimal prices. The hypothetical is that the company has decided to develop debilitating-orphan-disease-curing widgets which they will then sell to orphans at whatever price maximises their profits (resulting in total profit Y), rather than spending that development money on something else like a marketing campaign, or developing mildly-unpleasant-billionaire-disease-curing widgets (resulting in total profit X > Y).

I mean, if you know how to build the orphan-healing raygun, and it’s profitable, but you’re out of money to build it, why wouldn’t you get a loan? Or sell the patent? Or do one of a million other things that would lead to the efficient distribution of resources? Modern capitalism is pretty good at not leaving this kind of money on the table.

I can imagine circumstances where an opportunity cost leaves a good-for-humanity, profitable technology on the table, but only if it’s “only just barely, narrowly, uncertainly profitable.”

@Matt M
I have a child. Also, I used to be one. I know for certain that marketing can convince people to buy stuff they don’t need. There’s nothing magical about it.

@rlms
That said, we know that information doesn’t flow instantly. If I have a new (good) product, of course nobody knows that they are currently buying a suboptimally low amount of it. Marketing’s goal in this case is basically to communicate the new information.

Slavery at least would benefit particular corporations, those involved in the slave trade, but probably harm the economy overall by shrinking consumer demand (slaves have no disposable income and their masters have little incentive to provide them with more than the essentials). Depending on whether you think people would be more or less productive as slaves (I’ve heard arguments both ways, though I lean towards “less” except in the case of simple manual labor), it might be possible to make up the difference on the export market, but that’d fall apart if other countries followed suit.

Similar, but less clear, arguments might apply to the others. The bottom line is that live, free, and healthy people buy more stuff overall than dead, unfree, or sick ones, and that profits are determined not by how much you can make but by who’s buying and at what margins.

[Slavery would] probably harm the economy overall by shrinking consumer demand (slaves have no disposable income and their masters have little incentive to provide them with more than the essentials).

This is a pretty clear example of the broken window fallacy.

Consider a hypothetical example: a man employs a woman as a maid. One day he acquires enough money to buy a slave, and so he fires the maid. Now, without the maid drawing a salary, she has no money to spend, so you might ask: what about the profits of all the companies that manufactured the goods she bought?

Well, her salary came from her former employer, our newly minted slave master, who has saved himself the difference between the salary necessary to support her lifestyle, and the sum necessary to pay for the upkeep of his slave. He will presumably spend this money on goods or services provided by someone, and they will profit instead.

The broken window fallacy occurs when destroying some resource increases demand for it but creates a hidden opportunity cost. This is not an example.

If some members of the population become slaves that otherwise weren’t, that means some proportion of their demand for goods goes away. Let’s say for the sake of argument that their production stays the same (in reality it might not, as described above): nonetheless the value of the goods they produce has gone down, because fewer people have the ability to buy it. Observing that their owners now have more money to play with doesn’t get you out of the woods, because we can’t expect their demand for goods to go up proportional to their income: empirically, rich people do not spend as much of their money on consumption as poor people.

rich people spend proportionally less of their income on goods and services than poor people do.

True, but the difference is usually invested in some productive enterprise, for instance the stock market. So our slave owner, instead of purchasing the twenty-five thousand dollars a year worth of housing, food, and clothes that the maid did, spends five thousand on his slave and invests twenty thousand in some company that uses it to buy capital goods, or invest in R&D, or something else that will increase their profit.
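The bookkeeping being argued over is simple enough to tabulate; a toy sketch using only the dollar figures given in the comment above (the variable names are mine):

```python
# Where the maid's former wage goes after the switch to slave labour,
# per the figures in the comment: nothing vanishes, but the split
# between immediate consumption and investment changes.
maid_wage = 25_000     # formerly spent on housing, food, and clothes
slave_upkeep = 5_000   # owner's new direct consumption spending
invested = maid_wage - slave_upkeep  # redirected to the stock market

print(f"consumption: {slave_upkeep}, investment: {invested}")
# → consumption: 5000, investment: 20000
```

The disagreement in the rest of the subthread is about that last line: whether the $20,000 reliably becomes productive investment, or partly leaks into speculation.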

spends five thousand on his slave, and invests twenty thousand in some company that uses it to buy capital goods, or invest in R&D, or something else that will increase their profit.

Realistically they’d be spending something less than $20,000 on investments and the remainder on wasteful speculation and gambling (genuinely wasteful, not just long-shots), or even wasteful propaganda. This middle is something that poorer people tend not to do. What the long-run averages between real investment, bubble speculation, and propaganda are is a question I would be interested in an answer to.

One of the major economic drags of slavery comes not from reducing the slave’s demand, but his output. A man who is working for himself works smarter and harder. A man who works under the lash works just enough to not get lashed (and requires the lasher to spend their resources to watch him much more closely).

(This is also a major reason communist economies were so poor, communism being a reinvention of slavery in a lot of ways.)

There may be an uncomfortable edge case here, though: debt slavery. Somebody far into debt is almost definitionally not economically productive, so it’s possible that putting their work under “different management” would be a net gain for the economy.

Really? How about banning slavery, or child pornography, or enforcing laws against harvesting the organs of the homeless?

Prohibition doesn’t actually work.

In any case, I’ll concede that this may be correct, in a “letter of the law, but not spirit” sort of way. Yes, by making certain activities illegal, it essentially guarantees that the profits from illicitly providing them will flow to people who are not organized in the corporate form.

But whether or not “humanity” benefits when you have to start paying Al Capone because you can’t pay Budweiser anymore is certainly up for debate.

Second, alcohol consumption declined dramatically during Prohibition. Cirrhosis death rates for men were 29.5 per 100,000 in 1911 and 10.7 in 1929. Admissions to state mental hospitals for alcoholic psychosis declined from 10.1 per 100,000 in 1919 to 4.7 in 1928.

Arrests for public drunkenness and disorderly conduct declined 50 percent between 1916 and 1922. For the population as a whole, the best estimates are that consumption of alcohol declined by 30 percent to 50 percent.

The other half is that people like drinking. By not allowing them to do it, you diminish their enjoyment of life. It is not clear that this is a net gain for “humanity as a whole” even if you can point to statistics about death due to liver failure or whatever.

“But whether or not “humanity” benefits when you have to start paying Al Capone because you can’t pay Budweiser anymore is certainly up for debate.”
Are you saying that slavery, child pornography and organ harvesting should be legal?

Pretty sure the standard interventionist response there is “people’s actions are not perfectly calibrated to their self-interest (or our societal-interest); many of these miscalibrations are predictable, and so can be corrected for via societal mechanisms, e.g., education, jail, traditions, etc”.

The question wasn’t whether alcohol prohibition was good. It’s about whether it works at achieving its goal of reduced alcohol consumption. Assuming that the statistics are right, that means that it certainly did work.

If prostitution were legalized tomorrow, what exactly do you think would happen to the market? If prohibition doesn’t “work”, then we shouldn’t see any increase in its rate. But if we do, then it clearly did work.

Yeah, this is one of those mad liberal memes – it really makes no sense.

A few years ago you had David Cameron telling us that “punishment doesn’t work” – I mean, you really do have to assume complete irrationality for that to be true, don’t you?

At the same time as we’re assuming complete irrationality (when discussing practicality), we also have to assume complete rationality – people know and do exactly what is best for them (when discussing morality).
Well, of course they don’t – people do what society tells them to do. I’m not wearing trousers because I’ve decided I prefer the feeling to that of wearing a dress – I’ve never worn a dress.

For prohibition to work the law must be aligned with the moral sense of the average citizen. I suspect the prohibition of alcohol in Saudi Arabia is quite successful at reducing consumption by those Saudis still inclined to drink; unless of course there is some Hejazi Al Capone I have never heard of. Similarly, the prohibition of slavery in the United States has been fantastically successful at reducing slavery.

And, unless I’m underestimating the American heterosexual male, if child pornography were legal there would be a great increase in commercially produced porn featuring twelve- and thirteen-year-old girls.

Does anyone know if Prohibition lowered the amount of money spent on alcohol along with the amount of consumption? Or did consumption just go down because it became too expensive to drink at the old rate?

I have heard the claim that alcohol consumption was on a long decline that substantially predated Prohibition per se (and was plausibly attributed to moral suasion from the temperance movement), and that while the decline kept up during Prohibition for a while, it eventually bottomed out during Prohibition and was on the rise again before repeal.

I heard this claim too long ago to be able to attribute it, and am too tired today to Google around. So take it as “the unsourced mad ramblings of some asshole on the internet.”

For prohibition to work the law must be aligned with the moral sense of the average citizen.

So did southerners all collectively decide that slavery was immoral in 1865? No, they abandoned it because of force. Prohibition works when people fear the consequences more than they get utility from the thing in question.

Prohibition works when people fear the consequences more than they get utility from the thing in question.

The moral sense of the average citizen limits the consequences that can be imposed. It can get bloody interesting if there are no “average” citizens because the place they ought to be is the gap in a bimodal distribution, but you’re not going to be throwing people in jail for drinking moonshine or executing them for making the stuff in the 1930s USA.

That is true but there is a difference between the question of what kind of laws can get passed in a democratic society and whether punishment works. Kim Jong Un can set pretty much any prohibition he likes that works against the common person and it’s going to be effective because they don’t want to get thrown in a labor camp. Once you get away from North Korea, autocrats still have a lot of power to pass bills unpopular with the common people as long as someone is willing to carry it out.

This isn’t about how corporations can benefit humanity, but whether Economic Globalization should prioritize humanity or corporate profits.

This is equivalent to asking a leftist the question: “Should the government regulate the economy more, or should we favor a more healthy economy?”

I assume most leftists would say that is an unfair question, because it assumes that government regulation makes the economy less healthy. And I agree that it would be unfair to those who don’t agree that government regulation causes a less healthy economy. In the same way, your question above assumes that corporate profits don’t prioritize humanity. My personal opinion is that rising corporate profits usually do correlate to a higher level of humanity. So your question makes no sense to me.

I misquoted the original proposition. It referred solely to transnational corporations. And it didn’t refer to their profits, but to their “interests”.

If economic globalisation is inevitable, it should primarily serve humanity rather than the interests of trans-national corporations.

I understand and empathize with your point, as it is the kind of mental hopscotching I have to go through when answering the 5 propositions of the libertarian quiz that I believe are totally biased.

Ultimately we have to stop saying that the Republicans, Democrats, Greens, Libertarians, Globalists, Nationalists, etc… aren’t making any sense and recognize what kind of moral force lies behind their talking points. And we can’t do it by demanding that they rephrase their propaganda to our liking. Only then can we come to enough of a mutual understanding to agree to disagree, or possibly even agree to agree. 🙂

As to your hypothetical proposition: I’d answer that the government should regulate the economy more, even though I’d prefer to give a far more nuanced answer. I recognize in which general stadium my bias lies, and am okay with picking a proposition from left field over a proposition which is outside my general ballpark.

(Answer reposted)
See here. I filtered through the comments containing political terms and found one that made me classify you as a libertarian. The 6 means that the first 5 comments I saw weren’t conclusive.

The case for is the evidence Flynn offers of the results of tests over time.

The evidence against is that we know there were some extraordinarily smart people in the distant past, which shouldn’t be the case if the whole distribution shifted as fast as Flynn’s data suggest, and that things written a century or two back don’t seem to be describing a world where most people were barely above what we would consider retarded.

I thought the Flynn effect only claimed that people had started getting smarter recently (i.e. in the past 150 years or so). So, average IQ was 100 for a long time (say, from 200K years ago until the mid 19th century) but then began gradually rising (perhaps due to better environmental factors and nutrition).

Flynn only has data for the past century or so, but there is no obvious reason why whatever caused the effect should have only appeared at the point when we start having adequate data. The numbers suggest a rise of about thirty points over a century. By modern standards, that puts the average a hundred years ago at about the boundary between “mild retardation” and “Borderline Intellectual Functioning.” If you extrapolate a little farther back, it means that the median individual was what we consider retarded.
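The back-extrapolation is simple enough to check directly; a minimal sketch, where the three-points-per-decade rate is the figure quoted in this thread and the function name is my own:

```python
# Score past cohorts against modern norms (today's mean fixed at 100),
# assuming the Flynn effect's ~3 IQ points per decade held constant.
RATE_PER_DECADE = 3  # rough US figure quoted in the thread

def implied_mean_iq(years_ago, rate=RATE_PER_DECADE):
    """Mean IQ a past cohort would score on today's scale,
    extrapolating the Flynn rate backwards linearly."""
    return 100 - rate * (years_ago / 10)

print(implied_mean_iq(100))  # a century back → 70.0
print(implied_mean_iq(150))  # further back → 55.0
```

Which is exactly why naive back-extrapolation is suspect: within a century it pushes the whole past population down to the ~70 boundary, and not much further back it puts the median squarely in the retarded range, which historical writing doesn’t seem to reflect.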

If the “intelligence” of the population at large hasn’t dramatically shifted, then Flynn’s evidence is telling us that IQ tests aren’t strictly measuring “intelligence.”

Alternatively, it means that environmental effects (childhood nutrition, lead poisoning) have a strong effect on intelligence and g is not predominantly genetic, since on average a child will be more intelligent than both parents.

Either way, it makes me suspect the Horrible Banned Discourse is not based on logical conclusions dispassionately drawn from the whole of current scientific knowledge.

If we’re talking about human bongo depravity (or whatever the accepted name for Bell Curve style racialism is these days) then no, it’s not.

In inferential statistics the null hypothesis is the theory that differences between two sets of observations are caused not by an underlying difference in the phenomena being studied, but by some confounding factor, often random chance.

How to go about choosing a null hypothesis is a highly controversial subject, and a lot of very bad science has been done by calculating P-values against random chance alone, and not considering more plausible alternative hypotheses, for instance flaws in the experimental procedure.
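To make the “P-values against random chance alone” point concrete, here is a toy example (the coin-flip numbers are mine, not from the thread): an exact two-sided binomial test can reject the fair-coin null, but the small p-value by itself does not distinguish between “the coin is biased” and “the flipping procedure is flawed” – it only measures surprise under the one null you chose.

```python
from math import comb

def binom_p_two_sided(k, n, p=0.5):
    """Exact two-sided binomial p-value: the total probability of
    every outcome at least as unlikely as k under Binomial(n, p)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-9))

# 60 heads in 100 flips of a supposedly fair coin:
print(round(binom_p_two_sided(60, 100), 3))  # → 0.057
```

The 0.057 says the data are improbable under pure chance, but choosing chance alone as the null is itself the controversial step the paragraph above describes.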

The racialist view is that ethnic differences in IQ scores are due to underlying population differences in the frequency of genes that contribute to intelligence. The most prominent anti-racialist view is that the differences are cultural in origin.

Racialists are trying to infer from psychometric scores the existence of a genetic phenomenon that has not been directly observed. Anti-racialists argue that cultural differences account for the variations.

Due to the Flynn effect, the difference between the IQ of white Americans in 1940 and the IQ of white Americans in 2017 is greater than the difference between whites and blacks today. This shows us that there are at least some large differences between groups that cannot possibly be accounted for by genetic variation.

If we are going to pick a null hypothesis, it seems better to go with the more parsimonious view that there is one cause for group differences rather than two.

I’m choosing Muggle Realism as the null hypothesis, because AFAIK, there’s no reason to believe that evolution stops at the neck, given that genes substantially affect just about every other aspect of human life (or that race is a “social construct”). Like The Nybbler notes (but from the other side), it’s not that I don’t believe that various other factors (culture, oppression, nutrition, parasites, etc.) have an effect – it’s that I believe that if you control for those, genes do play a significant role.

The human population is genetically diverse, and, for genetic reasons, some phenotypic traits vary between subpopulations, but even you have to acknowledge that some traits don’t.

There is no reason to believe that evolution stops at the eyeballs either. And indeed there are likely strong genetic factors that affect differences in visual acuity between individuals. Furthermore, for reasons to do with nutrition and access to medical care, it’s very likely that once you adjust for age, Europeans have better eyesight than Africans.

Nevertheless, I have never heard anyone claim that whites have genetically superior vision.

But we would expect to see the rates of various eye problems to be different from race to race, even if we don’t have an off-hand guess on who has the better eyes. And a quick google indicates that race is indeed a factor.

(Hispanics have the most astigmatism, non-Hispanic whites have the most farsightedness, for the curious.)

Nevertheless, I have never heard anyone claim that whites have genetically superior vision.

I think you and I have different definitions of what constitutes Muggle Realism. I’m arguing from the position that human subpopulations have different genetic make-up, and that these genetic differences have significant effects on their behaviours and life outcomes. The only controversial part is the claim that this applies to intelligence; nobody I’ve heard of claims that differences in running speed, or height, between human subpopulations are in no way genetic.

Your idea appears to be that Muggle Realism constitutes just an assertion that whites are intellectually superior on a genetic basis.

Human bio-uniformity opponents don’t deny that malnutrition and lead poisoning affect intelligence. The claim is that once you’ve taken care of such gross damage, after that, intelligence is predominantly genetic.

I think they underestimate these confounders in many cases (particularly when talking about African intelligence), but they don’t deny they exist.

The Flynn effect is just an increase in measured test scores. The evidence for it is just the results of many, many IQ measurements people have made over time and is overwhelming. The claim that our recent ancestors were “developmentally challenged on average” is not part of the claim that the Flynn effect exists – it’s just one very naive interpretation of what the Flynn effect shows.

I think part of the Flynn effect has to do with standardised testing and the extension of compulsory education and hence literacy. People who left school at 14 with a scrappy education are never going to do so well on a written test as people who have grown up taking tests from kindergarten onwards, no matter how smart they may actually be.

I’m not saying that explains all of it, but I do think it has a part to play in the “people now do way better on verbal and mathematical written problems than people fifty or more years ago, how come moderns are so much smarter than their grandparents?” question.

The Flynn effect continued long after compulsory education, and therefore literacy, were almost universal in the US.

There are IQ tests that can be given to illiterate people, Raven’s Progressive Matrices for example. Wikipedia tells me it was first published in 1938, and the first Stanford–Binet test was created in 1905. In 1910 native-born whites had a three percent illiteracy rate; by 1940 it was one percent. I don’t know a lot about the history of psychometric testing, but I’m not sure there is even enough data to estimate the average IQ before widespread literacy.

Historic literacy rates are an interesting question; as you say, how do we establish average IQ before tests were even invented? And if we’re only measuring “data from tests since 1905 onwards”, then I do still think that methods of testing, familiarity with test taking, who gets tested (college undergraduates versus twelve year olds) and the like does have an effect on scores.

According to a graph on this site, the mean literacy/illiteracy level in the USA was vastly stratified by race; in 1870 the illiterate percentage of the white population was 11.5% while for “black and other” it was 79.9%:

The following visualization shows illiteracy rates by race for the period 1870-1979. As we can see, in order to reach near universal levels of literacy, the US had to close the race gap. This was eventually achieved around 1980.

And as for closing the literacy gaps globally, there is this:

We can also see that younger generations are progressively better educated than older generations. And it is particularly promising that this intergenerational change is happening especially quickly in the least educated regions of our world: notice how the slopes of the lines in the least educated countries become progressively steeper.

For things like Raven’s Matrices which are supposed to be culturally neutral and all the rest of it, I still think there is a ‘trick’ to solving the problems as they scale up in difficulty; they’re not immediately intuitive, and if you’ve been educated in test-taking and the kinds of problems that crop up as mathematical exercises, I think this would be of help to you in solving the “what pattern comes next?” for the harder ones. It may not bump your scores up by a lot, but even a small increase will seem like “people are getting smarter”.

It may well be true that people are getting smarter, but this does not mean that our ancestors were all drooling idiots! Going from a median IQ of 100 to 105 is an achievement and may well be down to better nutrition, health care, education and so on. I see that the Flynn Effect is talking about apparent huge leaps ahead; if we take the “average rate of increase seems to be about three IQ points per decade in the United States, as scaled by the Wechsler tests” then from 1950 to 2010, the average IQ should have gone from 100 to 118 which does seem like one hell of a leap.

But is it? First, how much has the population increased between 1950 and 2010? (That is, merely by having more people, it’s likely that naturally there are more smart people around – and before you tell me “yeah, but that also means more stupid people”, it’s not generally the stupid people who get IQ tested unless it’s for a medical diagnosis; ordinary IQ tests are for the reasonably smart.) Second, how many more people are taking IQ tests? Third, as I said, maybe increased literacy and education and simply becoming accustomed to taking tests all the time for everything in your entire school career do help bump up scores. And how have the tests changed over time? The earlier tests seem to be very easy by modern standards, but let’s look at that – maybe the earlier tests were very easy, so getting a high score on them in the first quarter of the 21st century isn’t that big an achievement after all! Since there’s been a lot of development in IQ testing over time, and the first Wechsler was for kids, maybe the problems were set at a lower level than we’d assume age-appropriate.

tl;dr – I think it’s “something from column A and something from column B” to explain the effect. People are probably getting smarter when we’re talking about the average over populations due to better healthcare, nutrition, education and the rest of it, and some of it may be down to the early tests were on a small population and weren’t as refined as the modern tests. It’s probably tricky to work out how hard to make the questions such that they’re not extremely difficult for the majority to answer, and I have a feeling early tests probably erred on the side of being easier rather than harder.

The Flynn effect shows up most strongly in those tests with a greater fluid intelligence loading.

This makes sense, as exposure to more scenarios would exercise and prompt use of fluid intelligence (it may have taken the average person too much time to even start using their fluid capacities way back in the day to notch a score on a timed test, and practically all tests are effectively timed – you rarely have multiple days to take them).