Links 3/15: Duke of URL

Is Europe’s Little Ice Age a myth? Apparently temperature records don’t show much of a decline during that period, and the reason the Thames froze was because the London Bridge of the era dammed it up.

In case Elon Musk, Stephen Hawking, and Bill Gates weren’t enough for you, Steve Wozniak becomes the latest science/technology celebrity to speak out about the threat of machine superintelligence.

I missed this the first time around, but I’m glad I found it now: Scott Aaronson on a novel idea about what might be required for consciousness. The good news is that it gives the intuitively correct answer to a lot of thought experiments. The bad news is I can’t see any other reason to believe it’s true.

Have you heard the story that lots of Mohawk Indians worked on skyscrapers because for some reason they were genetically immune to fear of heights? Turns out if you ask the Mohawks in confidence, they admit that they’re exactly as scared as everyone else but their culture teaches them to hide it.

There’s been growing evidence that zero-calorie artificial sweeteners somehow still make you gain weight, and that my cynical intuition that there’s no way they made a food taste good without being unhealthy was right after all. Now an Israeli team may have discovered a mechanism. Artificial sweeteners change the balance between Bacteroides and Firmicutes bacteria in the gut, and the latter seem to have the ability to break down food in such a way that the body absorbs more calories (!). If true, it might be not only an important step toward the development of free-lunch-weight-neutral sweeteners, but also toward a better understanding of obesity in general.

The strongest force in the universe is the tendency of Chinese people to kill and consume exotic animals out of some kind of far-fetched hope that it will cure diseases. What if we could use that power for good?

Free IUDs reduce teen pregnancy. Part of me wants to be snarky and say something like “sun reduces darkness”, but the last time they did one of these studies with condoms it turned out to be incredibly flawed, so I’ll wait until someone’s double-checked the methodology.

Lots of people and businesses are moving to small-government low-tax conservative states these days, which some have used as evidence of the success of conservative policies. Paul Krugman makes an interesting counterargument: the rise of air conditioning has increased the desirability of hot states relative to cold states, and hot southern states happen to be more conservative. Marginal Revolution basically agrees that weather is more important than politics in recent inter-state migration, but doesn’t think air conditioning in particular mattered that much.

That was unexpected: the Supreme Court bans regulatory boards made up of the profession they are regulating, in what looks like a big victory for, for example, entrepreneurs in the dental industry who don’t want the dental establishment to be the ones deciding whether they’re allowed to have a business. Cynical prediction: established players in the industry keep their regulatory boards, but pack them with non-professionals who just happen to agree with them about everything.

Remember how a few months ago two female librarians heavily involved in social justice called a male librarian a sexual predator, going into lurid detail about how his offenses are so well known that “women attending library conferences have instituted a buddy system to protect themselves from him”? And how, unable to produce any evidence of this, they blogged about how sexist it was to put the burden of proof on victims or to demand people “treat both sides equally”? And how, after he lost his job, he sued them for libel? And how they became a huge online cause celebre for “refusing to be silenced” about the institutionalized sexism this represented in the (90% female) library field? And how feminist bloggers bravely spread the word and raised tens of thousands of dollars for their legal defense fund? Well, last week, apparently as part of some kind of settlement deal, the accusers admitted they made the allegations up in a post that literally used the phrase “mistakes have been made” and implied that they were still kind of heroes for raising the issue of sexism.

Related: a reporter inexplicably asks a pizza place if they would cater a gay wedding that for some reason wanted pizza. The pizza place says they are happy to serve gay customers but that their religion prohibits them from catering a gay wedding. The social justice world responds with such a flurry of death threats and rape threats and threats against their family that they are forced to close down. (Salon to publish article explaining how objecting to death and rape threats is a sign of “aggrieved entitlement” in 3…2…1…)

Is poverty in the United States declining? The answer turns out to be “it depends how you define poverty”, but a lot of methodologies converge on the idea that government programs have successfully treated the symptoms of poverty without doing much to lessen the prevalence of the disease. That is, if you give the poor food stamps they may be less hungry and therefore happier, but they don’t necessarily escape a poverty trap and end up self-sufficiently middle-class.

Speaking of which, Poverty Shrinks Brains is the title most sources are using to cover a new study which finds that poorer people have smaller brains than wealthier people, even at age one month old. West Hunter makes the objection I would have made, somewhat more forcefully and sarcastically than I would have made it. But as far as I can tell blame rests less on the study authors (who raised all possibilities) than on the coverage.

Vox: Why Education Won’t Cure Poverty, In One Chart. To save you a click, it’s about how poor Americans today are much more educated than they were a generation ago, but still poor – not the results table from that brain size study above.

The darknet market Evolution recently turned out to be a scam that ran off with its customers’ money. In the aftermath, the federal government is subpoenaing account information of the Reddit users who talked about it, including SSC-commenter and generally-swell-guy Gwern. They seem to be concerned about his prescient prediction that this would happen, but once they realize that he’s Gwern and that sort of thing is just what he does, hopefully they’ll leave him alone.

Speaking of Gwern, he is my source for this study on Intentional Weight Loss And All-Cause Mortality, which somewhat contrary to my expectations shows that trying to lose weight leads to a 15% reduction in all-cause mortality. Maybe some diets work after all?

Here’s another study that violated my expectations: No Link Between Military Suicide Rate And Deployments. That is, someone who joins the military but hangs out at a fort in the States all day has the same suicide rate as someone sent to Iraq or Afghanistan. If true – and it joins similar results from other studies – it suggests the military’s high suicide rate may be related less to battle-related trauma and more to attracting suicidal sorts of people. Which I guess makes a lot of sense, when you think about it.

I’m pretty sure it’s because unions and overzealous government drove businesses out of the North. I lived in Providence last year, and it’s basically a third-world country. Old money lives in beautiful old homes in the hills and everyone else is poor. I am currently living in Tennessee, where German, Japanese and other auto companies have recently built factories. I can tell you they didn’t choose to build here because there’s no one in Rhode Island who would like to build cars.

What I wonder about is whether people won’t start moving away from bigger cities in coming decades due to telecommuting, Amazon Prime, Netflix, Peapod… why pay thousands of dollars a month for a tiny place in Manhattan when the world now comes to you?

As someone who lives in Manhattan: the appeal is that bars, restaurants, grocery stores, movie theaters, friends, parks, and so on don’t come to you.

Also, by all accounts Millennials (like me) prefer walkable neighborhoods, don’t like driving, like biking, etc. Once you have an iPhone and a laptop, taking care of the front lawn and driving to work get a lot less appealing, and the amount of living space you have gets a lot less important.

Yes, but how old are you? Do you have a spouse and/or children? I felt the same as you in my 20s. Didn’t care about living space, wanted to be in the middle of a lot of excitement. I still like certain aspects of big cities, of course, but now that I’m engaged and living with a partner, planning to have kids soon, etc., suddenly I care a lot more about that front lawn and not having my future children crawling all over me in a 1 br apartment.

Of course, I’m not saying there are NO advantages to the big city over the country so long as we don’t have a teleporter, but I am saying that the relative advantages of city over country are, imo, decreasing as more and more stuff can come to you.

Although when your kids become teenagers they’ll probably want to live in the city.

And city living has some advantages even with little kids. I have two kids and am living in London, and there’s at least 7 nurseries in walking distance of my home. We have one of the largest teaching hospitals in the UK about 20 minutes walk away, which means a wide variety of medical experts and an easy way to avoid parking costs. We have a babysitting agency which I can call up and say “Help! I need someone to look after a sick kid tomorrow!” and they have so many people on their books that they send someone. There are umpteen little playgroups. There are all sorts of free museums, city farms, etc to take the kids.

@onyomi I think you might be getting the dynamic a bit backwards. Having stuff come to you makes cities more appealing, not less.

The three highest priorities for most grown-up home buyers seem to be low crime, good schools, and community, roughly in that order, with the first two being non-negotiable. Crime is no longer an issue in most places, and yuppie enclaves like Minneapolis and Arlington have good schools. That leaves community, which it’s much easier to find when you live some place that’s comparatively dense.

So why wouldn’t people choose to live some place dense if it has good schools, low crime, and more sense of community? Because city living comes with a lot of overhead. You have to make 10 grocery shopping trips a week to 6 different stores (or at least you have to in WalMart-less Manhattan), all on foot. If you’re bored, the nearest Blockbuster might be 30 minutes away by public transit. Bookstores might be even farther away. And so on.

Having everything come to you means you no longer have to worry about some of those frictions of everyday life. And that frees you up to focus on choosing a place where you really want to live.

I think you’re still missing my point. Yes, more jobs, stores, theaters… are nearby when you live in a city, which is why people have flocked to cities over the past couple of centuries. My point is that *relatively speaking,* it is much easier to have things come to you when you live in a rural area now *as compared to the recent past.*

I’m saying some of the disadvantages of living in the country have lessened in recent years, while most of the advantages of living in the country and both advantages and disadvantages of living in the city have remained about the same. Therefore, one would expect more people to move to the country than in the recent past, all things being equal.

For example, I recently moved from a largish city near a bunch of very large cities to a town with a population of about 1,000 that is a 40 minute drive from any major urban center.

In the past, living here would have meant feeling very isolated. Sure, I’d still have the natural beauty and the tight-knit community I’ve come to enjoy, but I wouldn’t have had the internet, most importantly. Now I can talk with friends on Skype, comment on SSC, watch Netflix and HBO, play video games with friends in other states and countries, do freelance work in my (recently non-existent) free time, get almost anything I want, including exotic, ethnic groceries delivered to my door by Amazon, etc. etc.

I’m not saying there aren’t still disadvantages to living in a town of population 1,000; I’m saying they are significantly ameliorated as compared to the past, while the advantages (super cheap housing, plentiful land, natural beauty, very safe, tight-knit community, logistical stuff like parking and going to the DMV incredibly fast and easy, etc.) remain.

There are disadvantages to living in cities that weren’t there sixty years ago as well, as can be shown by the fact that a lot of people left them. But these have political, not technological, causes.

Another problem is that Americans just aren’t good at cities. I’ve been to NYC and San Francisco, and I’ve been to Berlin. I’ve been to Baltimore and I’ve been to Bremen. NYC and SF are dirty, crowded, and filled with awful architecture and worse people, but Berlin isn’t — at least the parts I saw of it. DC isn’t crowded (because of its building height restrictions, which make it insane in a different way), but it’s dirty, sprawling, and filled with enormous roads and empty lawns. Baltimore isn’t even a first-world city.

That’s not to say that all European cities are as bad as American ones, though. Reykjavik has a core that’s built properly, but beyond that, it’s a sprawling, road-tangled mass of Hagkaups.

One of the things Bremen does right is light rail: it comes every ten minutes or so, you can get anywhere from it, and it feels a lot more convenient than the DC metro — probably because the trains come on time, which nothing in DC does, probably because DC isn’t run by Germans.

I’d rather live in Westminster than DC — even though I’d have to get a car; I’d prefer a place on the MARC, but Carroll County has very good and very unmentionable reasons for not wanting public transportation anywhere near it — but if America had a Bremen, I’d rather live there than Westminster.

I would strongly agree. Though there are exceptions, Americans really *aren’t* good at cities. Taipei, for example, is one of the most comfortable, user-friendly cities I’ve ever lived in, while Boston is the worst. My favorite big US city is Chicago, because it has the restaurants, museums, and theaters of NYC, but it’s still possible to drive, navigate, park, and afford a place larger than a broom closet. It is still somewhat lacking in the charming walkability of some European and Asian cities, however.

I have not been to Baltimore, but I guess I don’t consider it to be a *major* city, though I am surprised to learn it has 600,000 people. I just have a grudge against Boston because the traffic is horrible, it’s impossible to find your way around, it’s almost as expensive to live as NYC, but less convenient and with worse food. Though at least Boston has some public transportation.

Nothing compares to Asia in that department, though. In Taipei you fly into the airport and the super clean, cheap, modern, fast subway system connects directly to the airport. In what American city do you not have to first take some stupid tram, shuttle, bus, or taxi?

Yeah, as terrible as it sounds, I have more than once thought about Boston: this place needs a really good fire. Hopefully one that wouldn’t hurt anyone or destroy anything of tremendous historical value, but just enough to let people rebuild in a less insane way.

It’s jobs and cost of living. One of the PhDs I used to work with left Boston for a slightly better paying pharma gig in Houston. He now lives in a brand new house 3 times as large, his wife doesn’t have to work, his commute is shorter, and the public school system is better for his 2 kids.

Also, the weather is nice in California, but I don’t think it’s growing as fast as the South.

People do move to California for jobs, because the tech industry is creating a lot of them. For a while the construction industry was creating a whole lot of jobs, too, and there were lots of people moving to California for those jobs, but many of them were moving from Mexico (and Central America).

However, California isn’t doing so well at creating middle-income jobs anymore. There are still plenty of Mexicans moving to California for the low-wage jobs, and plenty of techies willing to pay $12 for a hamburger moving from other states. But in-between? Not so much.

I grew up w/AC in the South, and 100 degree days were still a huge life inconvenience–and we had plenty of 20 degree days in winter. There are a lot of parts of the country that are waaaay more pleasant.

Last I checked all the jobs were in New York and San Francisco. In general average wages are higher in Northern states (which one would expect to correlate with more jobs chasing fewer workers).

I think the real issue is there’s no available housing in the North, since the cities with jobs have zoning etc rules that all but ban new construction. With a fixed supply of housing, prices rise until enough people are forced to move to the South.

I’ll quote the guts of the article:
“The study found that the suicide rate for troops who left the military before completing a four-year enlistment was nearly twice that of troops who stayed. The rate for troops who were involuntarily discharged under less-than-honorable conditions for disciplinary infractions was nearly three times higher. Troops given these so-called bad paper discharges are often not eligible for Department of Veterans Affairs medical care and other benefits.”

This hints at the real problem. Combat deployments don’t correlate with suicide. However, I have been told (I cannot be specific, sorry) that the most significant known correlate is, wait for it…recruitment waivers for diagnosed psychiatric disorders and prescribed medications. Basically, during the nadir of the Iraq War and The Surge from 2006-2008, recruitment standards, particularly in the Army, were lowered. Waivers were issued that would not, in 2004 or 2010, have been granted. “Early discharges” and suicide are actually correlates of this third factor, pre-existing diagnosed mental disorders. That’s it, that’s the whole story. Lowered standards were qualitatively obvious during this entire time frame, but the Army (my branch, which I know the most about) denied it, to such an extent that I’m not certain it can be said that “The Army” as an institution ever acknowledged that they did in fact lower standards (the buzzwords were along the lines of “increased scrutiny” of prospective recruits). Anyway, I have reason to believe that the Pentagon was/is aware of the nature of the suicide problem, and specifically chose to just sit on it and take the heat over Veterans Committing Suicide while the dregs wash out naturally over the years rather than admit the consequences of their decisions and/or be seen to criticize currently-serving soldiers.
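That confounding story can be sketched with a toy simulation. All the numbers below are made up purely for illustration (nothing here comes from the studies or the leaked internal data): a single third factor, a pre-service waiver, raises the probability of both early discharge and suicide, and the two outcomes then correlate even though neither causes the other.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

N = 100_000
# Hypothetical rates, chosen only to make the effect visible:
waiver = [random.random() < 0.10 for _ in range(N)]
discharge = [random.random() < (0.30 if w else 0.05) for w in waiver]
suicide = [random.random() < (0.020 if w else 0.005) for w in waiver]

# Compare suicide rates between the discharged and retained groups.
discharged = [s for s, d in zip(suicide, discharge) if d]
retained = [s for s, d in zip(suicide, discharge) if not d]
rate_ratio = (sum(discharged) / len(discharged)) / (sum(retained) / len(retained))
print(f"suicide rate ratio, discharged vs. retained: {rate_ratio:.2f}")  # well above 1
```

The ratio comes out well above 1 despite suicide being, by construction, statistically independent of discharge once you condition on the waiver. That is exactly the pattern the comment describes: “early discharges” and suicide look linked only because both trace back to the same pre-existing factor.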

Also, suicide rates in the Army were much lower than those of American society as a whole. There has been much hand-wringing over the past eight years or so that, if the rates do not stop climbing, they will approach those of the wider U.S.

As for doing nothing, I can say that I have been subjected to more suicide prevention training (spotting the signs of, how to stop someone from committing the act, etc.) over the past six-eight years than is probably healthy.

I think you are spot on with your observations about relaxed admissions and military suicide. A similar line of reasoning can explain the significant rise in sexual assault.

The way my father, now retired, explained it to me was as follows: The war-time military and the peace-time military need to be considered as separate government entities.

This is easy to observe: during periods leading up to and during a war, the military optimizes for the recruitment of soldiers, many of whom are uneducated, aggressive, or have terrible impulse control. The rigid structure of the military branches keeps these soldiers in place (to an acceptable standard) and the soldiers carry out the war. Unsurprisingly, during periods where the war effort is nearing completion and is ended officially, the military optimizes recruitment for development/restructuring. The military has an entire war’s worth of data to analyze. Many operations can be further optimized, and equipment can be improved upon. During peacetime the military institutions hire administrators (to handle the returning soldiers/issues), researchers, and scientists. Note that the soldiers recruited for war are not very useful in peacetime, and likewise the researchers during wartime.

Now back to the sexual assault problem. The cycle of wars in the Middle East has been dragging on for longer than expected. A subset of the wartime soldiers who were recruited initially to fight have successfully navigated up the chain of command and are now occupying middle level officer posts (since the wars have gone on for so long, many of the original “peacetime into wartime” mid-level officers have either filled vacancies above or left the military). Here we can see the problem begin to take shape. The military is left with wartime officers (still as aggressive and impulsive as ever) in control of new batches of wartime recruits. The rigidity of the military structure has lost some key positions to soldiers who were never intended to fill them. One way to explain this is that wartime officers identify with their troops too well, and they begin to let rules slide because winning the war is more important than the integrity of the military. A strictly managed culture is vital for a functioning military, and as the culture becomes vulnerable we begin to see rises in sexual assault, inappropriate and offensive behavior (pissing on civilian corpses etc.), and contempt of orders/officers. The wartime officers feel obliged to protect their soldiers and the ugly behavior spreads unhindered until the war ends and the administrators get to sort out the mess.

Lesser Bull, that is a fair criticism of what I wrote, and the fault is mine for rushing through the comment this morning. Let me be more explicit.

[Disclaimer, since I should have included one earlier. Generally, internal reviews = no public links to the studies, but I can reach out and inquire if people are interested. Treat the following as hearsay, however you do so.]

The stateside accountability offices have seen an increase in sexual assault accusations (and other destructive behaviors). Their internal investigations, at the individual level, have more frequently (as opposed to, say, pre-Iraq/Afghanistan) been stonewalled by mid-level officers, often still deployed, who refuse to cooperate with the investigations and often explicitly deny the claims. Why do they disrupt the investigations? Obviously I do not know, but the family/friends in positions responding to this and similar problems, whom I have discussed this with, offer that the new generation of officers is explicitly protecting the accused and in some cases outright fabricating evidence in their support. They also offer that the military has been unacceptably inconsistent in punishing the offenders/officers.

To wrap it all up, the type/personality of soldier recruited/promoted is an explanation of the increase in sexual assault that the military is examining very closely.

I’m not familiar with the internal research on sexual assault. My personal suspicion is that the fundamental problem is exposure (more “coed” environments in the military) with what I’m going to describe as “rape culture feminism” acting as a positive feedback mechanism in terms of publicity. I have NO EVIDENCE (not even nonpublic data) to back this up, it is only my personal opinion.

Re: officers stonewalling investigations – I agree that officers are not “too oriented for combat” (lol). However, regarding the specific phenomenon of investigations occurring during combat, I have no problem believing that people are uncooperative, possibly to the extent of being openly hostile to investigators (who are special personnel rather than the standard officer appointed at random). I’m picturing myself being hassled at the end of my eighth 16-hour “workday” in a row by some pogue SHARP commissar doing her due diligence regarding an obviously bullshit he said-she said barracks-drama clusterfuck involving a portapotty. It doesn’t take a lot of imagination to see that you might not be friendly/cooperative.

Like “suicide among combat veterans” though, it may just be a giant red herring. Unfortunately what I know about the suicide issue leads me to believe that the Pentagon is willing to just keep piling on CYA regulations instead of addressing whatever the actual problem is.

During my time in the Navy, one thing I saw was a particularly useless way of handling anyone who did admit to being suicidal. They’d “vanish” into the psych ward or off to some unspecified shore command, or be shuffled off to some mindless scut-work division, and usually lose everything they’d worked for (if only temporarily). It seems to me that this sort of handling is essentially pouring fuel on the fire.

As for the mental health waivers issue, I’d not be surprised if that were true, especially for the Army.

There’s also the issue of a lot of mostly-unnecessary stress that might be sufficient to push people who are already of marginal stability over the line. Stress in and after combat can’t really be avoided – it’s the nature of the beast – but the kind of pushme-pullyou jerking around that’s endemic to the Navy between deployments is utterly maddening, and there’s almost certainly ways to optimize some of it away.

This seems to be the new methodology. I complained a few weeks ago (in the comments) that MIT was dealing with suicides by filtering out people with mental issues. The obvious result is that anyone who wants to do anything interesting in life has to hide their issues as hard as possible.

The real answer, if someone wants to be in the military but has mental issues, is that you find something besides holding hot weapons for them to do. I’m not a military guy so I don’t want to armchair quarterback this one too much.

MIT’s been doing this for over a decade. As I recall, there was a big case back in the day of parents suing the Institute for not preventing their kid’s suicide, after which the Institute adopted a policy of kicking out/suspending the suicidal.

Obviously MIT shouldn’t be held responsible for whether or not students commit suicide in the first place; then there would be no incentive to kick out the suicidal ones.

Provided that the suspension comes with full rights to remain resident (if the student wants), a full suspension of tuition and rent charges, and a suspension of any associated loan repayments, I’m not sure there’s a problem; if you find the work suicide-inducingly stressful, you don’t have to do it. Nobody’s going to feign being suicidal just to get free accommodation (but not progression-of-degree) at MIT.

I’ve never seen the workload induce mental illness in MIT students, though I suppose it could happen. (Failing certainly could.) It seems more common for students to have perfectly regular sources for their issues, like bipolar disorder.

Dude, I’d fake suicidality to get free lodging at MIT.

The Institute doesn’t want suicidal people on their campus. It’s terrible PR and freaks out everyone in the building and their parents when someone throws themselves out of a window. Then people would start demanding to know why the university didn’t DO something about a student they knew was suicidal and on campus, instead of just canceling their classes (which of course someone would probably argue just contributed to the student feeling even worse.)

Although, if you’re a managing officer, what do you do in this situation?
“Yes, I knew he was suicidal but I let him keep working in a job that I also knew was high-stress/had-access-to-explosives.” It only needs one person to commit suicide in that situation to make things very painful for the officer, who also has umpteen other demands on their time, such as accomplishing their day job.

Anyone have any thoughts on what the charity thing really says about human behavior? “Rich people are jerks” seems a little too simplistic and politically-motivated, though of course I suppose it could be true.

Maybe poor people’s reasons for giving to soup kitchens are the same as rich people’s reasons for giving to universities (that is, they feel like they should give back to an institution that’s been important to them and theirs)?

“that is, they feel like they should give back to an institution that’s been important to them and theirs”

I’ve always wondered if the real mechanism is that people give to address the types of problems they see the most, not necessarily the ones that will benefit them. This seems roughly consistent with the Atlantic article’s hypothesis that generosity is triggered by exposure to need.

There’s an Irish saying “Only the poor help the poor”, and probably most societies have similar versions.

My cynical side says that rich people give to things that (a) can be used as tax write-offs, (b) come with glitzy gala events where you dress up and show off for the society pages (I have never understood why, instead of paying X hundred a plate for the Dinner In Support Of Fuzzy Caterpillar Preservation, you don’t just write the cheque for the price of the ticket and donate that), and (c) let them make donations through third-party things like foundations or Society Lady Everyone Knows and her pet charity, rather than giving money to beggars on the street (because beggars will probably spend it on drink/drugs, they’re all professional scam artists making more money as beggars than they would at proper jobs, and beggars are icky).

“I have never understood why, instead of paying X hundred a plate for the Dinner In Support Of Fuzzy Caterpillar Preservation, you don’t just write the cheque for the price of the ticket and donate that”

Because then you don’t get to have dinner with all the other people who paid X hundred a plate.

That’s what my cynical side says: you don’t wear the appropriately-coloured ribbon because you care a bean about the cause, you wear it because all your peers are wearing it, it would be bad publicity if you didn’t wear it, and you get to turn up in your designer togs dripping with bling to the dinner/gala/charity auction and have your picture in the paper or on the TV showbiz gossip shows.

I think of it this way. Altruism and status-seeking are two subprocesses operating simultaneously. A successful charitable enterprise (or con-man as the case may be) is the one that leverages the subprocess most likely to result in increased giving. If your targeted donor/mark can be persuaded by altruism, you appeal to her morality. If not, you appeal to community status. Ideally, you hit both in the appropriate amount.

Does it diminish the widow’s sacrifice that she got to rub elbows with the Pharisees after dropping in her mites?

People are, on average, simply horribly poor at empathy. I don’t think rich people are any worse than average in that respect. Partially it’s availability bias – social bubbles are very good at insulating you from the society at large. It doesn’t help that everybody drastically underestimates the degree of economic inequality. A rich person, trying to extrapolate what it is to be poor, might think it simply means having a smaller apartment, an older car, fewer trips abroad per year.

Another example: people not understanding what it means to be depressed.

I had an interesting encounter with a beggar yesterday. I was on my way to sell a Playstation 2 at Gamestop, as I’m hard up for money, and told a woman as much when she asked me for some. On my way back I offered her a buck, because I remembered she’d said “Good luck” and wanted to show her I appreciated that. She refused and said I needed it more. Besides, she said, she has her social security check.

It dawned on me later that even homeless people (though she may not be technically homeless come night time) are taking pity on me!

I assumed it was because people’s idea of ‘a large sum of money to donate’ doesn’t scale proportionally with income. As in, donations are normally framed as ‘give £x’ rather than ‘give this percentage of your income’, and £x kind of carries on looking like a scarily big number whether you earn £10x or £100x.

Hm yea. Higher taxes on the wealthy are supposed to be good because they do what charity on its own can’t. But then internalizing this message makes you a dick if you’re rich. Or at least that’s the implied message in these “stingy rich” articles, it seems to me.

My first question is whether it’s even true. It seems to be unsourced in the article, and my attempts to find a source keep leading back to that article. I found some other analysis based on IRS data that shows different numbers. That’s incomplete, since they only have data for itemizers, but I don’t know how that affects the results.

My guess, if it’s true, is that the “poor” may include a lot of retirees who have cash but little income. This skews the hell out of all kinds of analysis based on income.

Church could be another factor. Weekly churchgoers are publicly exhorted to give on a regular basis, and I believe they tend to be poorer, adjusted for age, but I could be wrong about that.

The exact definition of “income” matters a lot. They may be using AGI, which I think is a smaller proportion of low incomes than high incomes. I’ve seen analysis that used “disposable income,” though I didn’t look into the details of exactly how that’s defined.

Note that “the rich” here is top 20%. The other analysis I saw showed a U-curve, with those making more than ten million giving the most, proportionately (and absolutely, obviously).

But these are complicated issues that don’t sell magazines nearly as well as “Those who make more money aren’t smarter or harder-working than you; they’re just bigger assholes.”

After doing some research, I’ve come to the conclusion that Stern is simply wrong. His claim is unsourced, and my attempts to find a source have only led me back to his article again and again. Furthermore, his claim clashes with everything I’ve been able to find. A report from the Urban Institute, using IRS data from 2011, the same year Stern claims the top 20% only gave 1.3% of AGI on average, shows all income groups giving at least 2.4%, with those making $10 million or more giving the most, at 5.9% of AGI.

But wait! That still shows that the poor give more than the upper-middle class, right? Well…no. That’s based on itemized returns. People in the lower income brackets generally don’t itemize. Those who do are not representative. For example, those making $5,000–25,000 per year who itemize get less than half of their income from wages. In fact, probably not coincidentally, that U-curve bottoms out at exactly the point where itemization rates start to max out.

The Consumer Expenditure Survey (see the Age of Reference Person table) confirms my suspicion that the elderly give more as a percentage of their incomes than the middle aged, and much more than the young, maybe for cultural reasons, but probably because they’re richer than their incomes suggest (due to savings). I have no citation for this, but I’m very confident that the elderly itemize their deductions more than the young.

Note that a person with low income can be a rich person whose investments are having a bad year, or who simply has opted not to realize any capital gains that year. Again, such people are almost certainly more likely to itemize than those whose incomes come exclusively from wages.

Stern cites the Chronicle of Philanthropy’s How America Gives. This is based on IRS data and has all the problems I mentioned above, and I believe there have been some additional concerns with their use of the ZIP code data that I don’t fully understand, but putting all that aside for a second, it is interesting that it finds giving as a percentage of income to be especially high in Utah and the South, which hints at a role for religion. But it may also be that these areas have proportionally more giving because they have low incomes, and all the stuff I said above.

Actually we know that religion plays a role, because churches are by far the biggest recipients of individual donations. Stern not-so-subtly insinuates that the rich are really just helping themselves by donating to universities, whereas the poor are selflessly donating to “religious organizations and social-service charities,” but another way to spin it is that the poor are helping themselves by donating to their own churches, whereas the rich are enabling elite universities to offer need-based financial aid to poor students.

So…it’s complicated. And Stern is spinning that piece so hard it could power a generator.

An optimistic take: rich people are charitable in different ways that are less obvious and have less emotional impact, but can be more effective. They are just less interested in signalling charitability via the cliché soup-kitchen kind of thing, but they may be doing other things to make the world better.

My dad wasn’t even rich – a middle-class small entrepreneur – but out of 10 employees he kept one, an almost completely useless mentally retarded man, on payroll out of charitability (he would just do basic cleaning and sweeping, and sloooowly), and he kept another worker who lost his home in a divorce out of homelessness by buying an RV and making him a night watchman on construction sites, living in that RV. If you try to add up what it would cost the government to put one 50-IQ guy into some kind of assisted living facility and put one homeless guy back on his feet, you end up with a considerable sum. For him it was much cheaper to get the same result – that is efficiency. But none of this shows up in the giving statistics: in the books, all you see is X number of employees and an RV for night-watch duties.

apparently as part of some kind of settlement deal, the accusers admitted they made the allegations up

That’s… impressive, and not in a good way. I’m almost curious to find out exactly how bad the settlement would have been without an admission of guilt — they’re usually harder to get than hen’s teeth.

…it suggests the military’s high suicide rate may be related less to battle-related trauma and more to attracting suicidal sorts of people

Or other parts of the military lifestyle. Even ignoring folk who deploy overseas, it’s a job that requires long times away from home on TDYs, a very specific brand of discipline, and encourages marrying young (and within three years of meeting someone!) with the corresponding increase in divorce (one of the biggest factors for suicide). I’d also expect that folk going overseas for deployment would get training and post-deployment counselling that could act as a confounding variable, although given the VA’s average competence I wouldn’t expect too much.

I’m cynical enough about the court system that I find it plausible they were forced to make this statement by being legally outmaneuvered, rather than because it’s true. When a corporation gets sued they normally settle because it’s cheaper, this might be a similar idea (I’m sure the 2 librarians don’t have $1.25 million sitting around). Then again, I’m not a lawyer, certainly not a Canadian one (the lawsuit was in Canada).

As someone with familiarity working in civil litigation: unless they had literally no money or access to money for further litigation, there’s a reason they settled before summary judgement, and it was a settlement that included a straight-up apology.

My random assessment is that the guy was nice and wanted everything to end, so he didn’t put the screws on (for lots of unrecoverable money).

This is clearly a bad thing—when someone can’t call for help themselves is when they most need their friends not to hesitate because of legal repercussions—but I’m less certain about the legality (i.e. the firing might be legal under a poorly-written law.) Can anyone weigh in?

The way you phrase it is a bit odd. “The firing might be legal under a poorly-written law”…but there doesn’t need to be a law to make an action legal. The question is whether there is a law that makes it illegal. I expect the answer here depends on the details of the collective bargaining agreement between the university and the police union. The article suggests that he was directly ordered to arrest the students and refused. My guess would be that that sort of insubordination is a quite valid reason to fire a cop. (But a definitive answer probably requires more facts and a lot of boring research.)

I don’t think Scott really properly addressed the taxes question in that post.

It’s a little strange because you first think that taxes only go up by a few percent with two incomes. But if expenses and income both increase proportionately, then the effect of the increase in tax will be disproportionate and can indeed be significant as a percentage of spare income even if it’s a smaller percentage of total income.
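To make that arithmetic concrete, here is a toy sketch with entirely made-up progressive brackets (not any real tax schedule). With these numbers, doubling income while expenses scale proportionately raises the average tax rate from 16% to 25% of total income — “only a few percent” — yet spare income actually falls:

```python
# Hypothetical progressive brackets: 10% up to 30k, 25% up to 70k, 40% above.
def tax(income):
    brackets = [(30_000, 0.10), (70_000, 0.25), (float("inf"), 0.40)]
    owed, lower = 0.0, 0
    for upper, rate in brackets:
        owed += rate * max(0, min(income, upper) - lower)
        lower = upper
    return owed

def spare(income, expenses):
    # "Spare income" = what's left after taxes and living expenses.
    return income - tax(income) - expenses

# One income vs. two, with expenses scaling proportionately:
print(spare(50_000, 35_000))   # one earner: 7000.0 spare
print(spare(100_000, 70_000))  # two earners, same ratios: 5000.0 spare
```

A small shift in tax as a share of total income is a large shift as a share of the much smaller spare income.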

I don’t understand how someone clever enough to write that book can say things as dumb as Elizabeth Warren says these days. Truly, there must be something in the water they serve in the Senate cloakrooms.

Can you riddle me something – it is part of your field of expertise: why do Anglos consider it a god-given human right not to have to share a roof with anyone outside your nuclear family? Individualism? Here in Eastern Europe we deal with poverty largely by mom and dad having their own fully paid house; grandma lives with them, two adult kids live with them, one of them married with a child, so they just glue a wing onto the house. Five incomes under one roof (including grandma’s social security) make poverty go away, pretty efficiently: cooking in bulk; often one car is enough (say mom works within walking distance, another person can take the bus, and another two work close to each other); not needing to buy multiple copies of things you need to have around, like a lawn mower or tools; even spending less on entertainment, because they can just sit around a table as a family and play Monopoly or something.

I think the fact that Anglos find this setup culturally (or genetically?) unacceptable contributes a lot to finding it really hard to deal with poverty, don’t you think?

It’s not considered a human right. It’s just that it’s shameful for an adult male to still be living at home – you’re not really a man, in some sense. You’re not independent. This is what the “you live in your mum’s basement” internet insult is getting at. That stigma is not as strong on women.

This has been made worse in recent years by delayed marriage and loose sexual mores. Those seem to be incompatible with living at home.

If people really believed that poverty traps were responsible for poverty, they’d be fighting to replace welfare nets with lotteries. Suppose the threshold is $500. Instead of giving 100 families $100 each, we should give 10 families $1000 each. Poverty is cured in a decade.
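The arithmetic of the proposal, using the numbers from the comment, can be sketched as follows. This is a toy model that assumes one round per year and that a grant above the threshold permanently lifts a family out of poverty:

```python
# Toy model of the lottery proposal: 100 poor families, a fixed budget
# of $10,000 per round, and a $500 poverty threshold (numbers as given).
families = 100
budget = 10_000
threshold = 500

spread_grant = budget / families   # $100 each -- nobody crosses the threshold
lottery_grant = 2 * threshold      # $1,000 each, well past the threshold

winners_per_round = budget // lottery_grant    # 10 families lifted per round
rounds_to_clear = families // winners_per_round

print(spread_grant, lottery_grant, rounds_to_clear)  # 100.0 1000 10
```

Ten annual rounds at ten families per round: hence “poverty is cured in a decade,” if the lifts stick.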

Come to think of it, I am not the biggest believer in poverty traps, but I would say they’re at least 10% of the problem, and I might support this.

But I wonder if lack of support for this is less about not believing in poverty traps, and more about not really having long-term consequentialist thinking where “save a few percent of people from poverty every decade” is a going concern, compared to “giving people their fair share” or whatever.

Is this your original idea? I’m tempted to write a post about this (with credit to you) but I want to make sure I’m crediting the right person.

This is very international. A factory was robber-privatized in Hungary – i.e. sold from state property into private property – everybody was fired, the assets sold, the land repurposed, etc. Of course the workers were on the verge of rioting, so to placate them doctors were bribed and almost everybody was put on disability checks. Out of 10M people living in the country, roughly 3M of working age, 0.8M were on disability.

Since this was obviously fake, the government wanted to put an end to it, so it ended up at the other extreme: the tragicomedy of asking a guy who lost a leg, every five years, “Are you sure it did not grow back?” – i.e. review procedures etc.

I’m not convinced this would work at all. People who win lotteries tend to have pretty poor outcomes relative to what you might expect, especially when everyone comes to them looking for handouts. My go-to explanation for poverty cycles is networking.

But fwiw studies of lottery winners in Europe where winnings are relatively modest find that they do pretty well after a while.

I agree that “the relatives will take it all away” is a pretty solid argument against lotteries as an alternative to welfare nets. Also a depressingly good explanation for why poor folk never seem to be able to build any savings.

That’s much easier said than done. These are the social networks that keep poor people from starving and may even help pay for the education or professional training that people use to get good jobs.

Imagine yourself in the situation. You grew up in a poor neighborhood, but you’ve got a decent job as an IT consultant. And now your uncle, who helped pay for your night classes, needs help with his heating bill. And your mother, who not only raised you but helped out with your rent a few years back when you were down on your luck, needs help with her rent. How easy is it to say no?

There’s also the issue of compassion and sympathy, here, especially when the previously-poor person lucks into far more money than they need to solve their immediate problems. By that, I mean that they see how much that infusion of cash alleviated their short-term problems and feel impelled to help their relatives or close friends similarly while they retain the means to do so.

I’m not sure at all how likely this is as an explanation. I do know that when I think about what I would do if I won a large sum of money, that kind of thing jumps to mind semi-immediately, and I’m not exceptionally poor (but also not particularly rich, to the extent that it matters here).

I think there’s a high probability that we have a different mental attitude to ‘random massive windfall’ money than steady income/earned money.

But also the fact that sensibly using relatively large amounts of money is a learned skill – people who are not used to having lottery-winner amounts of money don’t know how to use that to their long-term advantage the way someone who grew up with or slowly gained that amount of money does, just as a millionaire suddenly dumped onto my income probably wouldn’t know how to budget with that small an amount of money.

I think a significant component of that is that people who become rich from lotteries tend to be lottery players, i.e. people who deliberately engage in avoidable negative-expectation games while poor.

So lottery players bumped out of poverty will often have poor decisionmaking skills and easily fall back into poverty. Navin’s idea sounds to me more like it’ll issue money to random people or families to yank them out of poverty in one fell swoop, not only to lottery players.

I’m sympathetic to the idea that high temporal discounting doesn’t magically evaporate when one comes into possession of a pile of money, but I find it hard to give the latter story, at least, much weight when it’s so very easy to play “Spot the Confederate Flag”.

(That said, I’ve heard of Shanesha Taylor before and the basic outline of the story seems substantially correct. I’m just not sure how representative it is.)

I’m not sure what proportion of people below the poverty line buy lottery tickets, but I believe it’s a majority. Even not specifically filtering your winners to all be lottery ticket buyers wouldn’t prevent most of them from being the sort of people who would buy lottery tickets (maybe the ten-year lottery would make some of them stop, who knows?) But the proportion of poor people who don’t know how to handle large amounts of money responsibly is probably higher than the proportion who buy lottery tickets. Their experiences render profligate spending more psychologically available than prudent use of excess funds.

You can find lots of ideas in a similar vein. I’ll suggest a few others, on the premise that your post will go in a theoretical direction (about why people take such positions rather than the specific one suggested by Navin).

In technology, many activists claim women are discriminated against in hiring. They favorably cite the example of evaluating orchestral musicians behind an opaque screen (to hide race/gender and hire solely based on the music). Yet they tend to oppose similar hiring methods ( code samples and standardized work sample tests) in technology.

Another poverty-related example is Karelis’ “bee sting” theory. The claim is that utility functions are flat near zero income – giving a poor person $1 helps him less than giving $1 to a rich person. Yet the proponents of this theory tend to oppose most obvious policy changes based on this.

A third example is Keynesian economics, which says that nominal wage and price rigidity causes unemployment and recessions. Yet Keynesians tend to support policies creating nominal wage rigidity and oppose most policies designed to lower real wages.

In technology, many activists claim women are discriminated against in hiring. They favorably cite the example of evaluating orchestral musicians behind an opaque screen (to hide race/gender and hire solely based on the music). Yet they tend to oppose similar hiring methods ( code samples and standardized work sample tests) in technology.

Because doing such things in music demonstrates that there is discrimination and results in hiring more women, and doing such things in technology demonstrates that there is not and does not.

First, if a lot of people in poverty actually need these welfare nets (my general experience from working with people below the poverty line is, a lot of them do,) then there’s plenty of room for them to be worse off than merely “poor.” This measure could result in a lot of people turning to criminality to support themselves, a lot of kids growing up malnourished (leading to another sort of biodeterministic poverty trap,) and possibly a pit of infinite sadness, as in your post featuring old age homes.

Second, a lot of people in poverty never learn good spending habits (which comes into play when we’re dealing with actual lottery winners, or pro athletes, who usually end up running through all their savings.) While this could be a result of low intelligence rather than learned experience, my experience leads me to think that there are cultural values as well as learned reinforcement which contribute to most poor people being bad with money. So even if the once-a-decade lump sum theoretically could get a poor person out of poverty if used right, the recipients probably would mostly not use them that way.

Third, since they still need to deal with all the costs that welfare would otherwise be helping them pay, a lot of people might end up taking loans out against their expected future lottery prizes, which, given the uncertainty of time-to-reward, and interest rates (probably predatory given said uncertainty,) would leave the people worse off than they were before.

Alternative (A): A lot of people aren’t as smart as you are, and have failed to think of this idea due to sheer lack of brainpower.
Alternative (B): Politicians’ incentives work against a lottery scheme.

I’m not actually sure if this is true. I’m just putting it out there. A and B are separate claims, though.

EDIT: Oops. Scott’s post makes this kind of silly. I didn’t realise he was going to post here when I posted.

While I’ve no doubt that (A) is true, (B) seems like a pretty clear culprit. If I announce that I’m running on a platform of cutting away the social safety nets in order to institute a lottery scheme, I’m not going to get into office no matter how well I explain the numbers behind my plan.

The demographic who doesn’t want safety nets cut will surely think that I’m either wagering the lives of the poor on an untested and insane scheme or I’m somehow funneling the money to cronies. The demographic who does want safety nets cut will call me a socialist for wealth redistribution and demand that I stop stealing their money. I will lose ALL the votes.

(That said, I’d bet most people probably believe poverty traps are only part of the whole.)

“Friends, I come to you all with a proposal- the end of poverty in our time. All we need to do is double current spending levels. Who will join with me on my campaign to begin the largest tax increase yet conceived?”

(On the other hand I would totally support a guy who offered to fund that through the abolition of modern prisons if he could come up with a way that didn’t make me weep with visions of future atrocities.)

((Seriously, the numbers on those budget sheets for various departments of correction are probably best expressed via scientific notation.))

“I would be okay with us spending the money if we got it by eliminating bad things like modern prisons” reminds me of the arguments decades ago by science fiction fans who thought that space colonies could be funded by cutting down on government waste.

If cutting down on waste gets you any money, you’re not exempt from having to compare programs and decide which one is the best use of the money, just because the money is “free” and wasn’t created by raising taxes. All the considerations that would lead you to pick one program over another still apply to money you got by reducing waste; unless spending it that way is a good idea anyway, you shouldn’t be spending it on poverty lotteries or space colonies.

I’m not signalling my desire to build such a lottery as long as it doesn’t raise taxes, I’m inappropriately expressing my frustration at the cost of the prison system in an unrelated thread. I apologize for that.

I would spend my own money to help people out of poverty, one at a time, if it had a good chance of actually working — that is, it keeps the whole family and their descendants out of poverty.

It would be an amazing GiveWell project. If it scales it would really end poverty.

I suspect the problem is that you can’t just dump money on it. But I encourage anyone who thinks they know the answer to 1) read the literature to see if/how it’s been tried before, and then 2) try it themselves.

My understanding is that in many countries at least (US might be different) welfare systems were sold politically as insurance: if things go pancake for you then you won’t starve. Solving long-term multi-generational poverty wasn’t so attractive to voters. Giving lotteries to poor people doesn’t really address the insurance aspect.
So the people advocating the poverty trap idea might truly believe it, just not be able to fund it politically.

“Poverty” is a political machine. Some people get jobs in the poverty management business and serve as a higher level of employee in the political machine. Other people get a steady stream of income from poverty programs and serve as vote banks.

Most people are willing to have some of their tax money go explicitly to the poor not to lift them out of poverty, but to keep them from starving to death.

Public schools and other programs available to all (or most) are there to help the kids of the poor be not-poor as adults.

Someone else comments that most people don’t understand what a poverty trap actually is, which is likely true, but the worst poverty traps are mental states – habits and social expectations. Giving someone poor a big pile of money probably won’t make them not-poor in the long run, unless that pile is so big that they just can’t spend it all.

Interesting. Arguably rent control is a lottery; it gives a few lucky people massively subsidized housing. But as a lottery it’s very inefficient, because the recipients would almost certainly rather receive in cash the difference between market rent and what they pay, and it screws up the housing market for everyone else.

The more fundamental argument against rent control is that it distorts political economy; see Alon Levy here (I link to this article whenever the topic comes up on Facebook). This is also probably why politicians like it.

This essentially happened to a friend of mine, via inheritance. In his case, it worked.

Basically, SF has some pretty strong tenants’ rights laws, which makes owners reluctant to take on tenants who look like trouble–and homeless schizophrenics look like trouble. If you stay anywhere for more than a month, you’re a tenant–so housing for the down-and-out tends to kick everyone out before the month is up. This is obviously a logistical nightmare for anyone who wants belongings. It’s also far more expensive per night to rent week-to-week at hotels than to have a year-long lease, but the homeless generally don’t have the funds for that.

One of the first things he did with his inheritance was pre-pay for a year of housing. So the inheritance helped him–after a decade of homelessness and street living–achieve permanent housing, or at least something closer to it.

I don’t have any idea how widely applicable the results are, though. He was at a point where housing was a priority for him and he wasn’t addicted to anything and his schizophrenia was medicated; other people might have very different priorities/things going on.

Weird how the world refuses to follow the laws of economics. I mean, there should be someone making a killing out of giving loans to homeless people to prepay a year – with a high interest rate. In the kind of world economics describes, there is no such thing as not having money; there are always loans, if you are deemed to have earning capability.

I don’t think it follows from that. Empirical evidence suggests that most lottery wins are wasted. It would be more efficient to pay out in kind – housing in a better hood, better schools, etc. Basically, the most important factor is to get out of the slum that keeps people back; but middle-class people don’t want random slum dwellers to move in where they live, so even if they believed this would work, they would not support it.

If people really believed that poverty traps were responsible for poverty, they’d be fighting to replace welfare nets with lotteries.

“IF X is bad but X1 is good, people would be fighting to replace X with Y.”

No. No we would not. I don’t want poverty trap “welfare X,” I want non-trap better welfare X1. Lotteries Y might raise some out of poverty but would lead the rest to starve to death, to a much greater degree than current welfare, poverty trap or no.

There are ways to change it so it is not a poverty trap – getting rid of means testing is one, so people can actually get an advantage from working. A better way is a 2:1 replacement: each dollar you earn removes half a dollar from your welfare, so you still get some benefit from working.
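A minimal sketch of that 2:1 taper (the $1,000 base benefit is an assumed figure, not from the comment): each earned dollar removes fifty cents of welfare, so total income always rises with earnings, unlike a cliff-style means test.

```python
# 2:1 benefit taper: each dollar earned removes fifty cents of welfare.
# The $1,000 base benefit is a made-up illustrative number.
def benefit(earnings, base=1_000):
    return max(0, base - earnings / 2)

def total_income(earnings):
    return earnings + benefit(earnings)

# Working always pays -- no poverty trap:
print(total_income(0))     # 1000: benefit only
print(total_income(800))   # 1400.0: keeps 50 cents of each earned dollar
print(total_income(2000))  # 2000.0: benefit fully tapered out
```

With a hard means test, earning $800 could cost the whole $1,000 benefit; under the taper, earning more never reduces total income.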

The counter-signalling by not using spices thing is interesting. When looking at old recipes of the sort David Friedman researches, I am often struck by how medieval people put weird spices in weird places, like cinnamon on meat (actually quite good; not a big fan of fruit+meat, though). I always thought it was because their food was super gamey and/or rotting, and people developed more of an aesthetic of “appreciating the natural flavors of the ingredients” once they could afford better stuff, though I guess that would not actually contradict the theory.

Rich people could afford the ingredients you could stand to eat without spices, so once spices became cheaper than good ingredients, heavy spice use became lower status.

The “overspiced to hide the flavor of rotten meat” story is an urban legend that people who do medieval cooking keep running into. There is no evidence for it and it makes no sense. Medieval people were living in an agricultural society where meat could be purchased on the hoof and slaughtered when needed. Further, spices came from far away and were expensive, meat was local and so relatively cheap.

Finally, what do you think happens to a cook who keeps making his employer sick by feeding him spoiled meat disguised with spices?

When someone tells you that story, the very first question to ask is how he knows that the food was overspiced. Medieval European recipes almost never give quantities (the Islamic recipes are a little better on that). It’s possible to get some information by indirect measures, such as the total shopping list for the enormous feast that Chiquart describes in Du Fait de Cuisine, and my conclusion from that is that his ratio of spice to meat is similar to the ratio in recipes that we have worked out to taste by trial and error.

While meat might be able to be purchased on the hoof, my understanding is that mostly it was slaughtered in autumn, when the beast was fattest, and then stored as best as possible for as long as possible. Winter feeding was expensive, and meant a skinnier, less-healthy beast by spring.
Royalty might of course be able to afford to avoid those constraints.

Though I also don’t believe in the covering up rotten meat. We once accidentally left some meat for a while in a rubbish bin, and the smell! No way any amount of spices could have covered up that.

It might be worth pointing out that preservation does not necessarily produce particularly appetizing results. If we were seriously trying to save the theory, you might hypothesize that it wasn’t to cover up spoiled meat per se, but poorly preserved meat.

But salting, pickling and smoking were the meat preservation methods, not covering up spoiled meat with spices. Gaminess is a different matter; you do need to hang meat to let it tenderise (and develop flavour), and if you make an error in letting it hang too long, sure, bunging in some spices to cover the “oops, that’s a bit too well-hung” probably happened.

But spices as ingredients are a fashion in cooking, and we see the waxing and waning of cookery fashions even today – see the way 70s food fads are now seen as overdone and laughable (such as Black Forest Gateau), or the way certain foods are coded as class markers (Roy Keane’s remark about the prawn sandwich brigade).

The best counterargument I can come up with offhand is a general-vs-specific one.

While sure, it may be the case that in general your run-of-the-mill whoever was usually eating fresh meat (setting aside for the moment how, say, a yeoman or whatever was supposed to consume an entire large animal fast enough), it is not very hard to find plenty of specific examples where rich people were definitely not eating this sort of fresh-from-the-field meat.

Take, for example, William the Bastard’s several-month delay crossing the Channel with his army into England. In order to prevent his mostly mercenary army from getting bored and burning his entire duchy down, he was obliged to provide them with some ridiculous number of tons of food per day, which needed to be shipped in at incredible expense over vast distances so as not to plunge his duchy into famine. Or while pillaging his way through England – in Kent, I think – we’re told by contemporary sources that his soldiers ate bad meat and promptly imperiled the whole invasion by setting dysentery loose in the camp. Or, I don’t know, let’s continue the trend: how about a few years later, when he got tired of fighting so many damned rebellions and went and burned a quarter of his new kingdom down, leaving neither a blade of grass standing nor an animal alive? During the ensuing famine I am pretty sure not many people were eating much of anything. It isn’t impossible to imagine that William, now the Conqueror, might have had his chef add a little extra spice to the only meat to be had in all of England. Plus… didn’t he die from a sudden intestinal malady while leading his army in the field?

It is only hazardous for the chef to poison their boss if they don’t die from it. 😉

So we’re clear, I’m not saying it was standard, just that it isn’t impossible that it happened occasionally.

Well, even if we can’t tell they were using a greater quantity of spice, I would still say that most medieval recipes strike me as being more heavily spiced, in the sense of, “using a greater number of spices, more often.” Though I’m not nearly as knowledgeable as you, I associate medieval European food with a lot of nuts, a lot of fruit glazes, compotes, chutneys, etc., and a lot of pepper, cinnamon, and other “warm” spices most Europeans wouldn’t put on savory food today. Even steak au poivre and similar dishes strike me as less popular now than they were in say, the 50s (so may be partially a much more recent development). Hence the idea that barbecue sauce is terribly medieval.

And while I agree that farmers would in many cases be eating much fresher food than us, what about the issue of refrigeration? When I said rotting, I didn’t really mean, literally rotting, I meant like, either intentionally fermented, or else a piece of meat that’s been left in a 50 degree cellar for a week and which now has a bit of mold on it and smells, but which will probably not hurt you if you scrape off the mold and cook it well.

Spices are, of course, well known preservatives, which is also why I assumed they tend to be more heavily used near the equator (though they also more often grow there, of course): food goes bad faster there. Considering the many pickled and preserved traditional foods, and the fact that you had to make it through winter, etc., it seems likely to me that, even though your medieval farmer would sometimes be eating food straight from the tree, bush, or cow’s teat, he would just as often be eating food long preserved by means of spicing, fermenting, and other such processes. Didn’t ancient Romans dunk everything in heavily fermented fish sauce, for example?

Also, what about the issue of gaminess? Wild venison, for example, is totally unpalatable to me unless it’s been marinated for days. In addition to probably eating more wild game, I’m guessing even domestic animals were not as thoroughly bred and fed as to make their flesh soft, fatty, and mild as we expect it to be today. Maybe the rich could eat fattened cows and goose livers, but presumably the rest were (when they could get meat at all), more likely to be eating a wild quail, deer, squirrel, dog… and even if their own, domesticated animal, an animal that would have eaten a more diverse diet than today?

I will agree that, in my experience, there is a pretty thin margin before meat goes from “a little old” to “nobody would eat that unless starving,” so probably spicing was more about preserving and masking gaminess than about masking actual rot?

I’m not saying this is necessarily the case with you, but my experience with complaints about venison’s inedibility is that they’re the result of hunters’ lack of knowledge about cooking/anatomy/butchering resulting in them not even attempting to cut out connective tissues from the meat.

I’ve always assumed it’s from people being conditioned to not like the taste of actual meat. Coming at it from the opposite extreme – I’m disgusted by Kobe beef. It doesn’t taste like meat and it’s way too soft for muscle tissue – guessing here but I think my brain interprets it as meat from a sick animal.

People these days don’t have much exposure to even grass fed beef unless they specifically seek it out – deer are that much more wild tasting and different.

Also, what about the issue of gaminess? Wild venison, for example, is totally unpalatable to me unless it’s been marinated for days. In addition to probably eating more wild game, I’m guessing even domestic animals were not as thoroughly bred and fed as to make their flesh soft, fatty, and mild as we expect it to be today.

“As we expect it today” is a very very narrow slice of time that doesn’t even go back more than 30 or 40 years.

Livestock production has become increasingly dominated by CAFOs in the United States and other parts of the world.[4] Most of the poultry consumed by humans was raised in CAFOs starting in the 1950s, and most cattle and pork originated in CAFOs by the 1970s and 80s.

The lack of “gaminess” and soft texture isn’t present in grass fed beef – which is significantly tougher.

From the perspective of someone not in modern day America the weird meat is the tender, corn flavored mush served today.

How do we learn about the details of those recipes? From cookbooks of the period, I imagine. What else is in those cookbooks? Peacocks’ tongues? How to make a life-size swan out of pastry? Do our modern cookbooks give as much space to low-spice dishes as to high?

I think it was as simple as a change in fashion. Anyone remember the craze about “Nouvelle Cuisine” which in part included the move away from old-fashioned recipes that were heavy on cream, butter and rich ingredients?

If Princess Louise likes her dishes with very little spice (due to her own personal taste), then courtiers will copy her, and the fashion trickles down (I’m not saying this is so, but it’s one way trends change).

In my own country, “people who eat their dinner in the middle of the day” is a way of describing country people/working-class people, i.e. old-fashioned, behind the times, and/or poor. Yet up to the 18th century, that was when you ate the main meal of the day, whether you were a duke or a ploughman. Fashion in eating, leisure, working hours, and the like meant a gradual shift so that wealthier and/or urban people ate their main meal later and later.

Fashion in spices may have been the same, though I have no idea why, for example, the British Isles cuisine regarded garlic as a foreign fol-de-rol, when in previous periods they would have used things like ramsons (wild garlic) to add savour to dishes.

A bunch of arguments for why it doesn’t make sense that medieval people were using spices to conceal the taste of bad meat, including some information about laws against selling bad meat, and that the vegetable dishes were as highly spiced as the meat dishes.

Elon Musk, Stephen Hawking, Bill Gates, and Steve Wozniak still aren’t enough for me, not until one of them can describe the process by which we go from “AI exists on computer” to “AI killing human beings in physical reality” by using something other than ridiculous, unforgivable cheating.

That implies that a) an intelligence level of “arbitrarily smart” is something that can conceivably be reached in the real world (plus or minus epsilon), and b) it is possible to make virtually anyone commit suicide merely by talking to them. I agree with DrBeat: merely assuming that these things are true does count as “cheating”.

By the way, one thing I personally always found somewhat curious and/or depressing (though not suicidally so) about science is that it has a tendency to close as many doors as it opens. For example, assuming our current understanding of physics is somewhere in the general ballpark of being correct, there’s no amount of intelligence that will allow an AI (or anyone else) to travel faster than light. Most likely, an AI who is 10x smarter than us would look at all the bounty of nature, the intricate interplay of forces and fields, and conclude… “yep, can’t go faster than light, oh well”. Intelligence isn’t the same thing as magic.

Given a sufficiently complex computer program, I would expect there to be some input that will make it crash, or at least behave incorrectly. This is just a restatement of the principle that there’s Always One More Bug, and is pretty well supported by the history of software development.
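The “Always One More Bug” point can be made concrete with a toy sketch (my own hypothetical example, not anything from the thread): even a parser of a few lines has inputs that make it blow up, and the failure modes multiply as programs grow.

```python
# Illustrative toy example: a parser that looks reasonable at a glance
# but has crashing inputs lurking in its edge cases.

def parse_key_value(line):
    """Parse a 'key=value' string into a (key, value) tuple."""
    key, value = line.split("=")  # silently assumes exactly one '=' in the input
    return key.strip(), value.strip()

# The happy path works fine:
parse_key_value("color=blue")        # ('color', 'blue')

# But these both raise ValueError:
#   parse_key_value("a=b=c")         # too many values to unpack
#   parse_key_value("no separator")  # not enough values to unpack
```

The bug here is trivially findable; the commenter’s point is that in a program millions of lines long, some analogous unchecked assumption always survives testing.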

If you believe human brains can be modelled as computers, it can seem plausible that there’s an input that will make your brain crash, or at least behave weirdly. For example, optical illusions and other sensory illusions.

It’s not that big a leap from “There exist inputs such that your brain will do the Wrong Thing” to “There exist inputs such that your brain will stop working altogether”.

> It’s not that big a leap from “There exist inputs such that your brain will do the Wrong Thing” to “There exist inputs such that your brain will stop working altogether”.

That’s a pretty giant leap, IMO (outside of some special cases such as epilepsy). A healthy human brain is so resilient that it can survive even a metal spike being driven through it. Partially, this is due to the fact that the brain is not organized like a digital computer at all; rather, it is a massively parallel analog device.

It’s a gargantuan, parsec-scale leap from “there exist inputs such that your brain will not perform optimally” to “there exist purely audiovisual inputs such that your brain will stop working altogether, that are consistent across multiple humans, and can be derived from knowledge of neurology”.

Honestly I think epilepsy is strong evidence that this kind of thing is possible. We already know there are people who have a neural quirk such that particular audiovisual inputs cause dangerous brain malfunction. Why should it be limited to epileptics?

It depends on what you mean by “input”. Certainly, if you locked a human in a cage for fifty years with no sensory stimulus they’d be affected profoundly, probably to the point of madness. But an AI capable of doing that to humans could just kill them if they wanted to. The sensory input has to be something relatively easy; something like a specific sound or image that causes humans to shut down. It seems like a stretch to suppose that such a stimulus exists.

It is quite common for suicidal people to feel that they are a material and/or psychological burden on their families, such that their parents, children, spouse, etc, would be better off if they were dead. They are often mistaken about this, of course. But it’s a pretty obvious vulnerability for an attacker to exploit. Indeed, when I have (abstractly!) contemplated the issue of how to talk people into suicide or other self-destructive behavior, this is usually the only plausible path I can find – learn about their relationships with others, and persuade them that their existence is innately harmful to their loved ones.

Do we want to grant that for all or even most people there is a textual or graphic output such that seeing said output would cause them to commit suicide? I’m sure there are certain people for whom such outputs exist, but I’m not sure they exist for everyone, or even a majority.

Let’s think of this from a different perspective: do you think there’s a meal so bad that to taste it would uniformly make the taster kill themselves?

What if you could convince someone they were going to be made into sex slaves? Or that they were going to have medical experiments performed on them? Or that they were the last person on earth, and about to die of AIDS? Seems like there’s always something that will cause a “crash.”

It is something of an open secret that hackers could kill people fairly easily in any number of ways, mainly to do with interrupting normal function of utilities like electricity, and mostly don’t bother because this will cause any government to come down on you hard. And those are humans, not AIs. With very high confidence, a fast strong AI could be launching missiles belonging to some government somewhere inside of a few hours.

Umm, I am not privy to this secret. Could you explain in a bit more detail how that would work?

Just to give you a point of reference, a few years ago the entire state of California suffered from intermittent non-hacker-related blackouts, and people there still seem to be mostly alive (*). Come to think of it, my own apartment lost power a few weeks ago, and I don’t remember dying…

The linked article implies that the vulnerability is limited to a particular software version, which has since then been fixed. I will grant you that an attacker could’ve conceivably killed (or, at least, seriously inconvenienced) a person who was running the insecure software; but this is not the same thing as saying, “a hacker can kill anyone at anytime just by writing a script”.

Bugmaster, you are the first person to bring up “a hacker can kill anyone at anytime just by writing a script”. No one upthread made any such claim; OP said “AI killing human beings in physical reality”. Please do not do this thing that you have just done. It’s not conducive to conversation.

As far as I understand, the Stuxnet worm required physical access to target devices in order to infect them. This was achieved through a prolonged campaign of social engineering. Unfortunately, most targets would already be susceptible to such a campaign, without any hacking whatsoever. It’s not hackers we should be afraid of; it’s grifters.

Initial Stuxnet infection is from plugging an infected USB stick into a Windows computer, so you don’t need much social engineering (and besides, a clever AI can social-engineer as well as a pretty good human social-engineer, so well enough).

There are a fair few SCADA systems on the internet, according to some security researchers. I do remember reading about other studies that suggested several of those are nuclear plants, and with the number online it’s likely some are. I would be moderately confident that some nuclear reactors that have online-accessible SCADA systems don’t have physical shutdown mechanisms that will operate correctly in the face of hostile control software.

(and besides, a clever AI can social-engineer as well as a pretty good human social-engineer, so well enough)

This is cheating! There is no reason to believe an AI would be good at social engineering. An AI would not be built for it and would have VERY little opportunity to practice it before being found out and made to knock that shit off. You have a brain that is very, very good at modeling the state of other human brains, and a lifetime of experience convincing other human beings to do things. An AI has neither of those things and cannot gain relevant data without being discovered long before it can put that data into use.

Unless you believe “how to convince human beings to do things” can be derived from first principles, claiming that of course an AI would be good at social engineering is cheating.

There are lots of books that have been written about psychology and social engineering in particular, for example Kevin Mitnick’s The Art of Deception which even includes a bunch of sample scripts explaining how a social engineer might go about gaining access to places or data. Admittedly a lot of the book requires in-person or phone interaction, but especially given the time since it was published I’d expect that a lot could be done over email and/or social media.

More generally, I think you have a very different intuition for what an AI would be like than most people here. When I see discussions about Unfriendly AI my image is of something that’s at least as smart as a very intelligent human, while also having access to lots of data in the form of the Internet, not needing sleep, not getting tired or distracted, and so on. And it seems obvious to me that if I were trying to use social engineering to get access to a protected system, I would do a trial run, or many trial runs, somewhere else first.

This is getting kind of long, but…finally, are you familiar with phishing attacks? They don’t really require any knowledge of human psychology, but some people still fall for them.

On the assumption that the hypothetical unfriendly AI has internet, it has approximately infinite attempts at social engineering to play around with without being detected, just by posting on the internet in public forums and trying to deceive people. If it fails at deceiving people in some way, it almost certainly just looks like an idiot or a spambot. Detection mechanisms set up to log where the AI is going on the internet are likely quite defeatable – computer security is not a historical success story. At the very least, they have to be clever enough to catch the following attack:
– AI digs up as much information on computer programming and the source code of various programs, some of which are popular and internet-connected. If you’ve got your AI internet-connected, this is probably not too suspicious – it just wants to learn how to interact with $OTHER_PROGRAM (instant messaging app, for example).

– AI sends interestingly-malformed packets aimed at one of those programs. The packet appears malformed in such a way that it’s an accident – endianness is wrong, field lengths are wrong, a string isn’t terminated right, extra apparently random binary data is stuck on the end, whatever. The malformed packet actually happens to trigger a bug in the target program that allows for surreptitious remote code execution. I 100% guarantee that if you are a very intelligent entity with access to enough programming knowledge and the source code of a number of programs, you will find such a bug. See the underhanded C code competition for examples.
– The remote code the AI sets up using this bug is a bootloader that lets the AI write a program on the target system using bits sent steganographically – the AI appears to just be chatting innocently, or accessing a database of music, or watching videos, or whatever, but the inputs contain hidden information the remotely-planted bootloader reads to construct and execute a program.

– The steganographically-sent program is a general-purpose command-and-control agent accepting commands embedded steganographically in something apparently innocent, that can be used to do things like post on forums without appearing to, break into arbitrary systems, get access to info the AI wants to not be obvious it knows, etc.

The steganographic step is to make the initial bug exploit easier to do surreptitiously – it’s easier to make inputs that encode a bootloader look innocent than inputs that encode a full-on worm.
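The steganographic bootloader step above can be sketched in miniature (a purely illustrative toy, assuming the simplest possible scheme – hiding payload bits in the least significant bit of each byte of apparently innocent data; real covert channels in network traffic are far subtler):

```python
# Minimal LSB steganography sketch: embed a payload in the low bits of a
# "cover" byte stream, then recover it. Hypothetical names throughout.

def embed(cover: bytes, payload: bytes) -> bytes:
    """Hide payload bits (LSB-first per byte) in the low bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(cover) < len(bits):
        raise ValueError("cover stream too small for payload")
    out = bytearray(cover)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)

def extract(stego: bytes, length: int) -> bytes:
    """Recover `length` payload bytes from the low bits of the stego stream."""
    bits = [b & 1 for b in stego[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = bytes(range(256))  # stand-in for "innocent" traffic
secret = b"bootldr"
assert extract(embed(cover, secret), len(secret)) == secret
```

Since only the low bit of each byte changes, the carrier still looks statistically close to the original – which is the whole point of the scenario: the monitored traffic appears to be ordinary chat or media access.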

It also has access to an extremely large collection of literature on manipulating humans, studies on techniques for manipulating humans, etc. Most of this literature can be plausibly accessed under the guise of learning how to interact with humans correctly, or of trying to understand some apparently-weird result from social science.

And why are we hooking the AI to the Internet again? Even when Yudkowsky is cheating (and oh good lord does he ever cheat) he at least acknowledges that the AI would not be hooked up to the Internet and would have to trick someone into doing it.

Even if it was, you are then assuming that nobody is paying attention to it, like nobody is going to be monitoring its Internet traffic. Once the AI is either looking up the information on the Internet on how to trick people into giving it what it wants, or masking the nature of its inquiries, someone says “Yeah, we gotta put a stop to that shit” and disconnects it. When it starts sending phishing emails, someone goes “fuck, we’ve only brought more spam into the world” and turns it off. You think it can mask all that activity without anyone knowing it is up to anything? Only if all that it does can be derived from first principles, because your AI gets exactly ONE try, no practice, and no information about it beforehand (because looking up “how do I stop my overseers from knowing what I am looking up” means it gets turned off).

Okay, so it sounds like we agree that the AI could do bad things if it were hooked up to the Internet, so we shouldn’t do that. But let’s say we do build an AI in a research facility somewhere, and it does something sketchy, and we turn it off. Should we then give up on AI entirely? That seems like a pretty big potential loss. On the other hand, if we decide that we’re not going to place any limits on what we use AI for, we’re probably going to have to let AIs get smarter and smarter until one of them very well might know enough about human nature to successfully deceive people. Or AIs (or the principles behind them) become so widespread that somebody does connect one to the Internet one day and we have all the same problems.

Basically, what’s your vision for how AI should be used safely? I think we have a major disconnect on how it’s going to be used.

Some of the higher-level posts in this thread talk about hacking, so I was talking about the case where it has internet. The case where the AI doesn’t have internet access is obviously substantially harder, and requires work-out-humans-from-first-principles-scale intelligence, which is obviously much less plausible.

But I think I’ve demonstrated above that there are plausible routes for an AI at approximately very-intelligent-human levels of intelligence to defeat internet-traffic monitoring. The scenario I detailed above involving an apparently innocent mistake that executes remote code on a remote system, and then steganography, allows the AI to do whatever it damn well pleases on the internet undetected. Was there something unclear about it? I assure you that the principles behind it can be derived very easily from information that is legitimate for the AI to be accessing – if you’re wanting to properly interact with, say, instant-messaging clients, you’re going to need to look up a bunch of information about how to program, the protocol used by the instant-messaging client (and source-code inspection is a legitimate way for a programming-capable AI to do that, not at all suspicious), and looking up information on programming and protocols will expose you to enough cryptography and computer security to reconstruct buffer overflow attacks and steganography, without ever explicitly looking for it.

EDIT: Relevant and extremely cool paper (Warning: PDF). Some researchers developed an algorithm for generating text that looks like English but also happens to be executable shellcode.

Because it would do to mental work what automation did for physical work.

“Even if it was, you are then assuming that nobody is paying attention to it, like nobody is going to be monitoring its Internet traffic.”

That depends how this proceeds. If we only use AIs in labs that are heavily monitored, probably. If they are put into positions where we can take advantage of them (since the people most likely to be building AIs are looking for a return on investment) that isn’t as likely.

And why are we hooking the AI to the Internet again? Even when Yudkowsky is cheating (and oh good lord does he ever cheat) he at least acknowledges that the AI would not be hooked up to the Internet and would have to trick someone into doing it.

You seem to have this mental model of a group of grimly determined researchers fully aware of the horrible risk developing an AI in an isolated underground laboratory, (hopefully with a nuclear self-destruct 🙂 ). That would not be sufficient, but it would certainly be a massive improvement on the actual situation. The real world consists of a large assortment of academics, hobbyists and startup companies hacking away on ordinary, Internet-connected PCs, a few with some compute clusters (also Internet-connected) running the code. In fact a good fraction of AI research and development specifically involves online agents, search and similar Internet-based tasks.

Intelligence is extremely, extremely complicated. It would take an enormous amount of resources to duplicate; the hobbyist making an AI in his basement wasn’t shut down by the AI Risk Squad, he was shut down by either his cable company or the electric company.

I’m not even sure how “the researchers, who used billions of dollars and loads of specialized equipment to make this thing, would probably pay attention to the things it was doing” got transmuted into requiring them to be super serious stone-faced and have a nuke wired up to their server cluster. Is the Bayesianism Therefore Magic power of an unfriendly AI defeated by nuclear fission in a way that, say, turning off the power or shutting off the Internet connection just can’t accomplish?

They cannot make the leap and are too intellectually lazy to try to rigorously prove such a leap is possible. I suspect many of these predictions of gloom (AI, global warming, etc.) are done for PR purposes. It’s like Stephen Hawking needs to periodically remind the world he exists, so he releases these unsubstantiated statements to prey on people’s fears, as pitchmen tend to do.

Given that no one has yet produced a smart AI, I salute your courageous overconfidence that they will be entirely safe and only used in entirely safe ways (out of curiosity, what rough probability do you put on an AI existential catastrophe, and what to your mind is an acceptable probability?)

Since expert opinion is all over the place on this issue (see eg http://www.nickbostrom.com/papers/survey.pdf ), I assume you have access to some unique knowledge that completely overrides everyone else’s expertise. Care to share?

I’m not sure what you mean by ridiculous unforgivable cheating, but we already have semi-autonomous weapons drones, computer-guided nuclear missiles, and soon we’ll have computer-controlled automobiles. Computer viruses have also been used to physically disable nuclear centrifuges (Stuxnet). An AI that exists on a computer can write viruses on that computer.

People have suggested endless variations on how this could happen (my favourite is simply that it produces such good economic advice that people become entirely dependent on it).

But to believe that AI breakout is impossible, you have to believe that a) all suggested breakout scenarios must fail, including those we haven’t thought of, and b) no one will ever do anything dangerous with an AI.

What’s wrong with “AI acts nice and harmless and useful until some human does something stupid”, for instance? Do you expect people will produce superintelligent AIs and then do practically nothing with them? And since we have to talk in probabilities here, what evidence do you have that the probability is low enough it can be ignored?

How about “if computers keep getting better and we don’t, then the A.I. that was human level a decade ago is now much, much smarter than humans?” Given that, it seems plausible that it could manipulate humans to achieve its ends, and those ends might be inconsistent with our welfare.

Also, I don’t think it makes much sense to imagine a world with only one A.I., or a few, in closely monitored environments. If they are as smart as humans they will be much better than humans at some things (and worse at others), which will make them very useful, which is an incentive to build quite a lot of them and use them for a variety of purposes in a variety of contexts.

Some of which (jumping ahead to some comments) will require Internet access.

How about “if computers keep getting better and we don’t, then the A.I. that was human level a decade ago is now much, much smarter than humans?”

This is plausible, but gives us ten years of experience dealing with essentially human-level AIs before we have to deal with the superintelligent kind. And if essentially human-level NIs are any indication, those ten years will include plenty of optimistically premature attempts at world conquest, etc, to clue us in to the threat.

The bit where human-level AI becomes superintelligent in mere hours, Because Computers Are Very Fast, involves some rather implausible handwaving.

Regarding the “crucial insight”, it’s not clear whether you are expecting the AI or the human AI researchers to have that insight.

For the former, we’ve had a hundred billion or so humans with reason to ponder the question, “can I make intelligence work much much better?”, and if you’re expecting the key insight to be relevant only when intelligence is implemented in silicon, by the time we have human-equivalent AI I expect we will have had millions of man-years of thinking on the subject. That a human-equivalent AI will in its first hours or even years find the insight that has escaped millions or billions of HI-years of thought, I find implausible.

That the first generation of humans with access to the hardware to implement a human-level AI might miss the software insights that would make such a thing possible, and only a decade or so later figure it out, I find slightly more plausible. Assuming Moore’s law still holds, that would give you human-equivalent AI at ~100x human speed.

On the other hand, I don’t expect that people will wait on hardware that can match human speed and intellect at once before trying to implement a “human-equivalent” AI. We will likely see machines that think roughly as well as a man but slower, long before we see superfast and superintelligent AI.

Well, if you were a fast, superintelligent AI, we’d all point and laugh at you for making that sort of mistake. Humans get a lot more slack, and necessary vs. sufficient is an easy enough mistake to make.

There’s a lot of middle ground between “AI explodes to godlike intelligence in a matter of hours” and “AI remains at a roughly human level for about ten years.”

Consider how much computing has advanced in the last ten years. If the baseline at year zero is “human level,” you’d have to suppose that progress has slowed down a lot in order to get a full ten years before the AI is significantly superhuman.

If you’re not going to restrict your plausible method to killing all humans, I don’t think it’s all that hard, once you have access to the internet. Send trash talk and sow discord on social media; you can convince someone that their partner is cheating on them, and boom, murder-suicide. Plant child porn in someone’s email account and notify the cops; you’ll get at least a significant chance of suicide. Drop-ship peanuts to people with deadly peanut allergies. SWAT people, especially people who own guns. None of this requires anything other than exchanging information over the internet; this is all stuff that regular people can do from their basements.

This isn’t even hard. I think the evidence for destroying humanity, rather than just killing people here and there, is a lot more circumstantial, but the idea is less “AI will definitely kill us” and more “superpowers are hard to predict, and guarantees of safety centering on ‘I can’t think of a way’ aren’t very useful.”

Regardless of whether we get general AI, within a century you’ll only see non-self-piloting vehicles or manually-operated manufacturing equipment among hobbyists and in museums. “How can a computer do anything without human help” will seem like a much more silly question than the converse. Even today you can’t so much as start a car without software intermediation.

I don’t know much about Kosovo, and the effects of intervention may have been more good than bad, but I would not cite it as a great success. As someone who tends towards non-interventionism, I think the best recent counter-example to this case would be Sierra Leone, and Mali could turn out to have been worthwhile as well.

Basically we fought a war to stop ethnic cleansing and ended up causing ethnic cleansing.

No, that is not correct.

First of all, the U.S. has never really cared very much about “ethnic cleansing” as such. At least, not normally enough to motivate sending troops.

The specific problem with Kosovo is that its population is over 90% ethnic Albanian, and it’s right next to the country of Albania.

When Serb nationalists were trying to purify Greater Serbia by killing or expelling non-Serbs, Kosovo was probably ahead of Bosnia on their list of targets.

After all, Serb strongman Slobodan Milosevic rose to prominence by his exploitation of ethnic tensions in Kosovo, and the supposed oppression of Kosovo Serbs by the Albanian majority.

However, the diplomatic community worldwide was very alarmed at the prospect of Serb attacks on Kosovo Albanians, because of the likelihood that Albania (the country) would intervene.

If the Albanian army crossed into former Yugoslavia to defend their ethnic Albanian brethren, it would no longer be a civil war. Presumably the Serbs would retaliate by attacking Albania. And things could get worse from there.

In other words, ethnic cleansing in Kosovo was feared because it would have caused a wider war.

Ethnic cleansing in Bosnia had no such international implication, so it was largely ignored.

With this in mind, George Bush, Sr. warned the Serbs that the U.S. would intervene militarily if the Kosovo Albanians were attacked. When Bill Clinton took office, he reaffirmed that position.

That threat is thought to have delayed war in Kosovo for several years.

The story gets complicated from there, but the bottom line is that U.S. involvement in Kosovo was motivated above all by a determination to limit the Yugoslav conflict to former Yugoslavia.

By this standard, the intervention was a complete success.

The messiness of the aftermath — slight compared to what could have been, or what actually happened elsewhere in former Yugoslavia — is just a detail.

As I wrote above, I wouldn’t have thought that creating a new independent nation of Kosovo was the best long-term answer for the region. But maybe that was the best the negotiators could do.

Well, the ridiculous opacity of medical pricing seems clearly to be hospitals’ fault, and I think that at least puts the onus on the hospitals to demonstrate that the ridiculous costs are _not_ their fault.

As far as the “decimation of the middle class” is concerned, you would think progressives would consider China the best option. It would mean that while things aren’t as great in the US, it’s only because they are getting far better in China. Not only are the poorest getting richer, but the world is getting more equal. Personally I hope it’s true even if it clashes with my evil neoliberal market fundamentalist tendencies.

Also, Scott Sumner has something to say about people moving to the south.

Your post seems to have some implicit assumption like “progressives should be interested in proving globalization is good” or “progressives should want to prove that the world is doing pretty well right now” which I can’t quite parse.

What I meant was not to look at the explanation that is more likely to be correct but the explanation that people hope to be correct, especially from a progressive point of view. If the robot theory is correct, then the outlook for the future is incredibly pessimistic (assuming that you would prefer people have jobs than not). If the labor union theory is correct, it does mean that there is hope for a brighter future, but that the last 40 years have only benefited the rich (and reform might not happen, considering politics). If the China theory is correct, it means that the last 40 years have been good for the very poorest of the world. Not only that, but while inequality within countries may be getting worse, inequality between countries is getting better. Looking at it from a Rawlsian perspective, this is clearly a good thing.

As far as the benefits of globalization are concerned, it might be a blow to people who consider it a really good thing. If it’s true, then proponents would have a harder time convincing people to go along with it, because it means that the middle class may not benefit. Only if you look at it on a global utilitarian scale does it become an undeniable good.

I don’t find myself particularly glad that the China explanation is correct (from a progressive point of view). Though, I did think the China explanation was glaringly obvious, while the robot explanation initially seemed a bit far-fetched. The article really surprised me.

Never mind best option, how about likeliest? This progressive figured something like P(China & similar) ~ 0.5, P(unions & similar) ~ 0.2, P(robots) < 0.01, in the contest for leading cause of stuck/declining middle wages. Given that Noah Smith said it, and before I even read the article, I'll adjust to P(China) ~ 0.98.

I should note that I live in the Detroit region, where we are very aware of the Giant Sucking Sound (h/t: Ross Perot).
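The back-of-the-envelope probability shift in the comment above is essentially Bayes’ rule applied over competing hypotheses: strong testimony from a trusted source moves mass toward one explanation. A toy sketch, where the priors loosely follow the comment and the likelihoods for “a trusted commentator endorses the China explanation” are purely illustrative assumptions:

```python
# Toy Bayesian update over competing explanations for middle-class wage
# stagnation. Priors roughly follow the comment; the likelihoods of the
# observed endorsement under each hypothesis are made up for illustration.

priors = {"china": 0.50, "unions": 0.20, "robots": 0.01, "other": 0.29}

# Assumed probability of seeing the endorsement if each hypothesis is true.
likelihood = {"china": 0.90, "unions": 0.05, "robots": 0.01, "other": 0.05}

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior["china"])  # most of the mass ends up on the China explanation
```

With these (hypothetical) numbers the China hypothesis ends up above 0.9, in the spirit of the commenter’s jump to ~0.98; the exact posterior depends entirely on the assumed likelihood ratio.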

I was looking through the old Library Journal threads on the Murphy issue, and found this response to a woman who was criticizing de jesus for her tactics. All spelling copypasta as written by the original commenter.

“Rebecca, you are making what you might think is a point for justice, but you are precisely absolutely and profoundly wrong. You wrote this “As I said, nobody knows the facts in this case, but if what Murphy claims is true, nothing can justify what has happened to him.”

NO!

We must stomp out sexism, racism and ableism in every corner. I imagine we should have some sympathy for Murphy if his side of the story is true. BUT, it is JUSTIFIED. If he did sexually assault someone, then he deserves much worse than the public shaming. If he did nothing, then he should STILL be happy about this, because it has raised awareness of the problem of rampant sexual violence at these conferences. If he suffered, would that suffering stand up as a significant data point when we put it alongside what women have to tolerate every single day? NO!

THAT is why all the mainsplaining here is so infuriating. Sometimes if you want to make an omlet, you have to break some eggs, as they say. Either Murphy deserves what he has gotten, or he is an unwitting player in a much bigger and more important struggle. If he really wants to be part of the solution, he should confess to committing sexual assault (whether he did it or he did not) so that he can help create a culture where women are not afraid to report rape, sexual violence and gendered insults (even ones that assulters keep in their minds, but they transmit to us through what they think are subtle means).

The truth is a political construct and not a factual one. Until we all recognize that we are all missing the point of this struggle.” – End Sexism Now

To be fair, that suggestion makes perfect sense if you assume that women experience an extraordinary amount of suffering due to everyday behaviour of men (or, possibly, the mere existence thereof). If we can alleviate this constant state of torture by incarcerating some innocent men, then the tradeoff is more than worth it.

I haven’t been following social justice trends for long, but still, by now I think I have been somewhat radicalized. I used to think that social justice activists were merely confused, or dishonest, or drastically uninformed; but now, I am starting to believe that they (or, at least, some of them) really do have a completely different set of terminal values as compared to someone like myself.

This is unfortunate, because it means that there is no middle ground nor any mutual understanding that can be reached. Negotiating with social justice adherents is like negotiating with Clippy: all that matters is how much force you can bring to the table.

It so perfectly encapsulated all I hate about the SJWs, I thought it might be a particularly sophisticated bit of satire. If that were the case, she is remarkably consistent with it. The commenter had several bits all roughly in line with the above comment. And while she was the most explicit of the bunch, her side was definitely in the majority in that thread.

I do take your point, her construction is that all women are so attacked every day that the process of being accused in public of being a serial rapist, fired, and dragged through court proceedings isn’t even a tiny comparison to how bad it is for female librarians. I had no idea! Libraries must be worse than the Mongol invasions!

> …isn’t even a tiny comparison to how bad it is for female librarians.

Not just “female librarians”, but rather, female humans, with “female librarians” being a subset thereof. It seems that at least some people really do believe that our society is a sort of wide-scale Omelas, where half of the population are constantly tortured for the benefit of the other half.

The thing is, I find it difficult to “hate SJWs” (as you put it) because of this. Their values are so alien that it would be akin to hating Clippy, or Hurricane Katrina. Such entities are not malicious; they do what they do because, due to their terminal values (and/or the laws of physics), they simply cannot act in any other way.

I’ve got a bifurcated brain. I analyze on one level, and do not hate. Curiosity is the driver.

I act on another level, and that requires emotional reserves. And that side is quite capable of hatred. My values, my identity, my very life and livelihood demand defense against people this vile. If I need to harbor a long term grudge to fuel the effort needed to combat it, it’s a small enough cost.

The forebrain is just a supercomputer strapped on top of a monkey’s brain. The key is to use both.

Clippy is a reference to a potential AI apocalypse: an AI, built to find value in creating paperclips, cheerfully transforms the Earth and all its contents into a giant mound of paperclips. It’s usually raised to point out the importance of getting AI utility functions right.

To be fair, that suggestion makes perfect sense if you assume that women experience an extraordinary amount of suffering due to everyday behaviour of men (or, possibly, the mere existence thereof). If we can alleviate this constant state of torture by incarcerating some innocent men, then the tradeoff is more than worth it.

If male mis-behavior is so prevalent, it should be easy to find some guilty man to target.

Hypothesis: most women don’t experience an extraordinary amount of suffering due to sexism, but feminists do, and they assume other women do too.

Studies have been done showing that higher-testosterone women are far more vulnerable to stereotype threat on, e.g., math tests: low-T women just want to do well on the test and don’t care if they are told men do it better, while high-T women see it as a competition, so any hint that they may not be as good at it feels devastating.

The LW survey found a connection between feminism and T (quite logical, actually), and it makes perfect sense to assume that high-T women would be far more devastated by anything that even hints at their being second-rate to men, while low-T women would not care.

The result is, feminists do feel an extreme amount of suffering, and project it to all women.

As someone who’s encountered a lot of really exceptionally awful SJW bullies in my day…that actually doesn’t sound like one of them. It sounds like exactly the sort of thing the anti-SJW crowd says when trying to imitate or parody SJW’s, though. I call false flag.

Yeah, I had a similar reaction… to me it feels more like a caricature than someone being sincere. It could just be someone further out on the fringe than the rest of the commenters, but either way I don’t think it represents the majority opinion even within that community.

That was my first instinct, but I read the whole thread, and it made me think otherwise. Of course, I could be wrong.

In basic reality, is the argument any more extreme than Joel Klein’s? Innocent men must be convicted of rape for the “culture to improve”? He spelled everything correctly, but I don’t think you can write off the argument as a “clear satire” when there is such a prominent vein of such rhetoric among the SJW class.

Really mate? There’s a SJW commandment, binding on every last SJW when they receive their membership card? “Thou shalt not use dialectic!”

SJWs are just the latest incarnation of cultural marxists, and while I agree that most of the current crop never studied the dialectic of their intellectual forebears, you can’t exclude the possibility. And there’s plenty of the older ones who learned theirs in the good old days, from actual communists.

I lack time, energy, and stomach to look widely into the rhetoric used by … well, many SJWs and their recent co-believers, so I can’t call it an attack on logic/rationality itself. I can only compare it to my dear Lewis’s own practice of taking an opposing argument impressively far by logic, then ending with Argumentum Ad Capitalization. ‘/// truth value /// inquire // blah // but God is Truth Itself.’

…Wherein the incomparable Patricia Hernandez chides the creator of Cards Against Humanity for responding to an accusation of sexual assault with a protestation of innocence, rather than using the accusation as an opportunity to raise awareness about the complexities of consent and rape culture.

You will note that there is an updated version of the article available. That came several months later, and was prompted by the parent site developing something of an Ant problem. The original version is, I think, informative in the context of the current discussion.

I agree this is all kind of ridiculous on the part of the social justice warriors, especially the quote you posted, but I wish we wouldn’t keep hammering nails in. If we’re going to convince anyone in the toxic SJW subculture to abandon their ways, we’re going to have to provide a way out for them… and that means that if they admit they did something wrong, we should forgive them instead of pointing out that they “implied that they were still kind of heroes for raising the issue of sexism” as Scott did. Let’s take the high ground instead of grinding people’s faces into the dirt. Just my two cents from someone who generally has little sympathy for SJWs. We should start acting at least a little nicer to them when they own up to their mistakes and try to change, not piling on additional nastiness.

I agree, but I think that ongoing ridicule and a merciless and thorough hounding of their ideas and persons from the sphere of polite and intelligent conversation IS the high road. I don’t want to get them all fired, make them homeless, call the police on them, advocate they be falsely imprisoned and most of all, that they confess falsely to create a better “culture”! This is the civilized option for dealing with people who neither deserve nor reciprocate the charity.

I don’t want to get them all fired, make them homeless, call the police on them, advocate they be falsely imprisoned and most of all, that they confess falsely to create a better “culture”!

That is why you will fail. The only movements to have achieved a modicum of success in resisting the SJWs (Gamergate, Sad Puppies) have done so by taking the SJWs’ own tactics and using them against them.

This is the first time I’ve heard about this Sad Puppies thing… I guess that’s what Gamergate would look like to a complete outsider: there are two clearly divided, completely opposite narratives, and I sincerely have no idea how much truth there is to either of them.

What exactly does “success” here entail? Nobody’s knocking down the walls of Scott’s garden, despite “Radicalizing the Romanceless” and “Untitled”. This isn’t a success? In fact, this is such a failure that you need to go all Arthur Chu and start escalating things?

What exactly does “success” here entail? Nobody’s knocking down the walls of Scott’s garden, despite “Radicalizing the Romanceless” and “Untitled”. This isn’t a success? In fact, this is such a failure that you need to go all Arthur Chu and start escalating things?

The garden is the garden. We have been mostly successful at upholding the ceasefire in this blog, and I do not advocate first use of such tactics against my fellow Slate Star Codex commentators, whatever their views.

The world outside these walls is a different matter. In any arena where SJWs have first resorted to boycotting, doxxing, firing, or swatting, they must be met with retaliation in kind. Anything else is tantamount to unilateral disarmament.

You still sound like Arthur Chu. What exactly do you intend to accomplish, apart from signaling that you’re very brave and acting badly in the name of winning, and it’s justified, so you can dox and swat the Hated Enemy to your heart’s content? You know who else thinks this way? Your opponents.

His opponents don’t think the same way–they believe in initiating such tactics.

Oh really? Didn’t you read the #teamharpy links, where it’s stated, over and over again, in every possible way, that this sort of behavior is indicated only as a response to constant, unremitting male aggression, and so is in self-defense?

Nobody believes they are just pressing the big red button without any provocation.

That’s mostly true and at the same time irrelevant. By your reasoning, never mind the Internet–you couldn’t use traditional self-defense either. Jack the Ripper comes after you, but fortunately you have a gun. Is it okay for you to kill him? By your reasoning, it doesn’t matter that you’re responding to provocation, because Jack the Ripper has his own (mis)understanding of provocation in which you provoked him just by being out on the street, and so justifying you killing him would equally justify him killing you.

I don’t even have to limit it to self-defense. “It’s okay for me to use something that I own.” Then a car thief comes along with his theory of ownership that says ‘I take it, I own it’. By your reasoning, I can’t say it’s okay for me to use something I own because everyone thinks they own the thing they are using.

At some point you need to say “anyone can say the same thing, but they won’t be equally correct, and some things are only justifications when correct”. Me and Jack the Ripper both think we’re provoked to kill the other, but only one side was *actually* provoked, and a rule about provocation can only be legitimately used by that one side.

We have extremely strict laws about self-defense precisely because of this! The entire point is to make sure not only that you are 100% certain you’re acting purely in defense, but that pretty much everyone else agrees with you. Otherwise shit will rain down upon you from above.

There is not such an authority in the case of internet slapfights. Therefore the chance of becoming the very monsters you fight is extremely high, as we’ve removed the incentives to make absolutely sure you’re in the right. Which means that everything breaks down into a giant e-bloodbath.

So either unilaterally disarm and trust the observers to notice that you’re not in the wrong here, or have your only defense that you’re the good guys here. Since “they started it” doesn’t fly.

Didn’t you read the #teamharpy links, where it’s stated, over and over again, in every possible way, that this sort of behavior is indicated only as a response to constant, unremitting male aggression, and so is in self-defense?

The “constant, unremitting male aggression” was an ambiguous, amorphous blob which turned out not to exist upon closer inspection. The clear method to avoid said failure mode is to inspect closely before resorting to dangerous weapons. I’m not persuaded that we should run cooperate-bot over tit-for-tat; only that benefit of the doubt needs to be strongly preserved (and erred in favor of). Pacifism only works against people who feel shame for poor conduct, and there’s some damned good evidence that they don’t feel shame for their bad conduct.
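For readers unfamiliar with the game-theory jargon in the comment above: “cooperate-bot” and “tit-for-tat” are standard strategies from the iterated prisoner’s dilemma, and the argument is about which one a movement should emulate. A minimal sketch of the difference (the payoff values are the conventional textbook ones, not anything from this thread):

```python
# Iterated prisoner's dilemma: cooperate-bot vs. tit-for-tat, each playing
# against an unconditional defector. Conventional payoffs: mutual cooperation
# pays 3 each, mutual defection 1 each, a lone defector gets 5 and the
# exploited cooperator gets 0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy, opponent_moves):
    """Score `strategy` over a fixed sequence of opponent moves."""
    score, last_opponent_move = 0, "C"  # tit-for-tat opens by cooperating
    for opp in opponent_moves:
        move = strategy(last_opponent_move)
        score += PAYOFF[(move, opp)][0]
        last_opponent_move = opp
    return score

cooperate_bot = lambda last: "C"  # always cooperates, no matter what
tit_for_tat = lambda last: last   # mirrors the opponent's previous move

defector = ["D"] * 10
print(play(cooperate_bot, defector))  # exploited every round: 0
print(play(tit_for_tat, defector))    # exploited once, then retaliates: 9
```

The point of the comparison: cooperate-bot is maximally exploitable by bad actors, while tit-for-tat cooperates with cooperators and punishes defectors, which is roughly the commenter’s argument for retaliation with a strong benefit of the doubt.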

@Jaime – “The only movements to have achieved a modicum of success in resisting the SJWs (Gamergate, Sad Puppies) have done so by taking the SJWs’ own tactics and using them against them.”

Having followed both movements from a distance, it is a bit of a stretch to claim that either “used their own tactics against them.”

Sad Puppies is an attempt to shake up the Hugo awards by inviting fans to get more involved in the process, through channels that are entirely legitimate according to the rules of the award. For an example of SJ tactics in that field, I would refer you to “requiresHate”.

In my time following the Ants, I never saw any movement or support for using doxxing or harassment against their targets. None. Zero. Their opponents made no secret of calling for such harassment, and celebrated it publicly when it occurred.

The high road is better. SJ has a problem where they cannot actually stop eating the heads off orphans in public. Stay virtuous, and the problem is half-solved.

You know how I define an extremist? It’s someone who attacks his allies for being insufficiently mean to the actual opponents, rather than taking what support he can get. It’s effective in purifying a movement down to its most radical and violent, but not much else.

You can scoff if you like, I am in this culture war for the long haul. I’ll be here long after you’ve apathied out.

@Jiro:
“That’s mostly true and at the same time irrelevant. By your reasoning, never mind the Internet–you couldn’t use traditional self-defense either. Jack the Ripper comes after you, but fortunately you have a gun. Is it okay for you to kill him? By your reasoning, it doesn’t matter that you’re responding to provocation, because Jack the Ripper has his own (mis)understanding of provocation in which you provoked him just by being out on the street, and so justifying you killing him would equally justify him killing you.”

Okay, now take Jack the Ripper, put him in the middle of a crowd and give yourself a rocket launcher.

SJ launches its weapons from behind the plurality of the American left. There is no way to fight back proportionally without further alienating 2/5 of the country. How do you propose attacking the rock throwers in the back without taking out the non-aggressors in the front?

Easy. You say that they are supporting an infrastructure that allows continual injustice against people like you, and are therefore culpable at some level, then fire your rocket.

Having followed both movements from a distance, it is a bit of a stretch to claim that either “used their own tactics against them.”

To be fair, GamerGate was doing boycotts and going after sponsors/advertisers, which are more recent developments. Unpleasant, and justified only as a response to the other side performing said tactics themselves.

Doxxing and harassment are generally discouraged as both immoral and ineffective. Though I could get behind doxxing people that send death/rape threats. Send those dox straight to the police.

@Hadlowe

Okay, now take Jack the Ripper, put him in the middle of a crowd and give yourself a rocket launcher.

Maybe social justice should be building sniper rifles instead of rocket launchers? There’s a reason we give cops handguns and not rocket propelled grenades…

I am extremely pessimistic, because I have seen the same personality traits in action IRL, apolitically. Basically, there are people who, due to some mental problem, are happy to harp on and on forever. Instead of making a short complaint and then returning to being happy, like normal people do, they will just spew angry words forever, and they win by wearing down everybody, because all the normal people are tired of the constant tension, hate having their moods poisoned by all that anger and frustration and darkness, and want peace and good vibes, which these people deny to them.

Solutions that have worked were either complete shunning or moderate physical violence, the latter amongst children and teens, as this is (sadly) generally not allowed between adults. Between children, it was amusing to see how efficient it was in shutting him up when one kid had had enough of Mr. Constantly Angry Sour Guy and just casually walked up and gave him a slap in the face. It was as if a dark cloud had lifted; everybody was happy not to have their mood poisoned by that constant stream of angry, bitter words.

InferentialDistance: To be fair, GamerGate was doing boycotts and going after sponsors/advertisers, which are more recent developments.

Is there some kind of consensus as to who fired first in terms of boycotts and the like? Does it matter? Here’s a graphics developer swearing off of developing for Intel because they, in turn, responded to a threatened boycott from the ants over “Gamers are over” by pulling their ads. Way to go, everyone–you’ve politicized a graphics card vendor!

Grendel, I don’t think Jaime is talking about who escalated to boycotts in the conflict between GG and anti-GG. He means that GG came into existence for the purpose of applying boycotts to fight back against similar pressure that existed before GG, pressure that was aimed at the inchoate community that became GG. (Of course, no, there is no consensus about what is going on, anywhere, except when there are explicit boycotts. And usually not even then.)

The ants tried to boycott Gamasutra, Gawker, etc. (not Intel) by contacting advertisers and pointing them to the offending articles. When Intel removed their ads, the, hm, myrmecophobes responded by threatening to boycott Intel.

I think that threats to blacklist people out of the industry for… myrmecophilia started after the boycott campaigns, but a few cases may have started before that.

@Caue – “I think that threats to blacklist people out of the industry for… myrmecophilia started after the boycott campaigns, but a few cases may have started before that.”

Some background: starting around 2007-2009, a confluence of factors (snowballing success of Steam, XBOX Live Arcade, emergence of smartphones) boosted the indie games community into prominence. At around the same time, the “games as art” debate was hitting a number of peaks. This drove a lot more gaming press attention toward the indies, right at the moment that the indies actually had platforms opening up to allow them to capitalize on their innovation in a serious way. Indie got huge almost overnight, and started reeling off hits at an amazing rate. Triple-A titles were mostly stagnating and mainstreaming, so all of a sudden, there was this vision of real artists making awesome games with small teams, and getting stupid rich selling them to the masses who were starved for quality content. That change, turning toward vision, toward art, changed the conversation in the Games Press to talk about what games could be, rather than what they were. It brought in idealism, and it brought in a healthy heap of SJ along with it.

For a while, everything was fine. Awesome games kept rolling out, people loved them, the Indie scene was on a roll. That question about what games could be was always in the background, though, and it started metastasizing into what games SHOULD be.

The Gaming Press over this period moved from being essentially a reviews system to being actual critique, on the pattern of film or literary critique. Like those fields, the Games Journalists grew to see themselves as curating and critiquing culture in general. They saw games as an important driver of culture, and began arguing that it should be driven in the directions they felt were best. For a long time, there wasn’t even a debate; on the one hand you had the triple-A world raking in millions on repetitive drek, and on the other you had a community of artists and designers and critics trying to make a better world. There was disagreement over the best way to approach this goal, and the various debates over ludo-narrative dissonance and games as learning tools show glimmers of the growing feud, but for a long time it was a mostly happy community.

The wheels started coming off around the time Sarkeesian arrived. She wasn’t the ringleader or the cause or any such nonsense, but the model she used became the new standard. In short, the Social Justice types made their critique and achieved nothing, so they escalated and escalated again, and Sarkeesian arrived right around the point where it stopped being a polite conversation. From the beginning, Sarkeesian was not interested in dialogue. She closed comments sections and refused any forum for debate, and the press frequently complied by likewise censoring comments on any article about her. Critique of her factual and ideological errors was shamelessly conflated with the bigotry spewed at her by random trolls. The community had experienced several nasty skirmishes with Social Justice already, including the Penny Arcade Dickwolves shitstorm, and Social Justice ideas like Privilege and Problematic Ideas and Expressions had been simmering for some time. With Sarkeesian, it became open warfare. Lines were drawn, allegiances were questioned, demands were made. The Press participated and encouraged this behavior, but a large majority of the fans refused to play along. The press grew more and more hostile to the fans, and the fans grew more and more hostile to the press.

Then the Zoe Post happened, and the Press decided it was time to fish or cut bait. The “Gamers Are Dead” articles were a call to split the community, purge out and wall off what were presumed to be the minority of heretics and barbarians, so that the decent and humane true believers could build a new community free of the old taint and corruption. It was a “with us or against us” moment. The clear implication was that those who disagreed with the new regime would be destroyed. Their subsequent actions show that it was not an idle threat.

TL;DR – A relatively small group of ideologues declared themselves Hegemon over a large and diverse community, and attempted to destroy anyone who disagreed or resisted. The community responded in kind.

People, in general, don’t WANT actual solutions to poorly-performing teammates. They want to be able to punish poorly-performing teammates – ESPECIALLY when they are worried about being seen as poorly performing themselves.

I think the level of forgiveness depends on the level of repentance, and the willingness to put things right. An apology extracted at lawyerpoint, which in places seems fulsome enough but in others has a distinct non-apology-apology feel to it, doesn’t earn much. In particular, it doesn’t earn any trust back.

Salon’s response to the pizza thing (mentioned elsewhere in these comments) isn’t an apology, but earns some recognition, although they’re still in my bad books for what they did to Scott Aaronson.

There’s no indication that these two are going to “abandon their ways.” In fact I’d predict at least one of them will be in court again for violating their settlement agreement. The best thing that could happen is that an example be made of them.

One of the two has already given up on a career in her chosen profession, retired from blogging and tweeting, and apparently contemplated suicide. And apologized and paid a share of the victim’s legal fees – which were likely substantial when compared to an entry-level librarian’s salary.

On the “let’s make an example of them” scale, I think this may count as overkill even if the other Harpy escapes with just the apology and the settlement. Enough.

I understand where you are coming from. Perhaps she is on the level with the apology. Perhaps she really does regret putting her victim through hell for the past several years.

I am less inclined to be forgiving here. Depression and suicidal ideation are awful, and I don’t mean to diminish the real suffering she likely went through/is going through. On the other hand she gleefully crowdsourced altruistic punishment on an undeserving person. She refused to recant until the last minute. Worse yet, she played the victim throughout. I can’t help but think that any empathy you are directing toward her is going to a person who has demonstrated a tendency to weaponize empathy.

@John Schilling: “One of the two has already given up on a career in her chosen profession, retired from blogging and tweeting, and apparently contemplated suicide. And apologized and paid a share of the victim’s legal fees – which were likely substantial when compared to an entry-level librarian’s salary.”

All these penances may not last through three news cycles together, except perhaps the portion of Murphy’s legal fees. Even if the Harpies’ received donations all went to charity, more donations may discreetly come in later.

The apology is already being discounted as forced by being “over-lawyered”. It seems their reputation among their supporters is unlikely to be harmed.

As for who was most lawyered, a senior librarian’s salary isn’t enough to hire a great force of lawyers, unless he also had a legal fund receiving comparable donations.

I read the post you linked to, and did not see any suggestion that it was related to the events we were discussing, nor any sign that the poster felt guilty, as opposed to depressed. Did I miss something?

I have now also read the apologies posted by both of the women in question. I see no evidence in them that either actually feels guilty, or believes that she committed a terrible act of which she ought to be thoroughly ashamed. The general tone is that they had good intentions, now realize that they didn’t go about acting on them in quite the right way, and so are retracting their claims.

For the record, Murphy was never a “senior librarian” and at the time he filed suit, he had lost his job — which one of the reddit /r/librarian posters called a “dream job” — as a result of the libel. (And he has been out of work since.)

Murphy only has a few years of actual work experience as a librarian (at Yale IIRC, which probably pays well, but he was still entry-level).

Apparently he garnered a kind of “rock star” reputation (within the library world, at least) as a result of giving conference speeches, but that’s fame/popularity, not money. (Librarian conferences, after all, are not actually rock concerts, even if they have “rock stars.”) And the fame was ruined, along with his career, by the libel. He was not done justice by a mere apology.

I have no interest in convincing these “people” of the error of their ways. The only realistic goal is for all reasonable people to realize they need to be hounded out of civil society like plague rats.

Indeed, the phrase is almost exclusively used quotationally, by some supposed bad person who endorses the metaphor. This is one of a number of subtle shibboleths that ruin the verisimilitude of the post.

This looks like a classic example of Poe’s law as applied to the SJW set. The post upthread is exactly what an author intent on satire would write, but the crazier parts of the movement could also easily produce it.

I wouldn’t actually disagree with the claim that the crazier parts of the movement could produce the substance – after all, I am the crazier parts, you all know me here mostly for my unreconstructed Stalinism and so on – but the linguistic cues seem all off to me. Of course I could still be wrong, that’s just where my intuition is.

Murphy’s corollary to Poe’s law says that the post in question will turn out to be a provable fake if you need it to be genuine, and vice versa. Using this bit as an example of the evils of social justice is as foolish as using the UVA chapter of Phi Kappa Psi as an example of the evils of fraternity culture. Smells funny, leave it be and move along.

I continue to believe that Murphy is misguided in pursuing this lawsuit, regardless of the merits of his case [italics in original]

[…]

Even if Team Harpy were making things up out of whole cloth, [italics added] women who experience sexual harassment but haven’t recorded the whole thing on tape are going to be terrified of being sued into the streets because few harassers are going to admit to their behavior. We need to make it easier to report harassment, not harder.

A similar sentiment was expressed by one of the people who lost this lawsuit, in one of the posts that the lawsuit addressed! Here:

We can and must take a stance of siding with victims. There needs to be a super clear message that whenever someone speaks up about abuse or harassment that they’ve experienced and encountered within a professional space (conference, work, whatever) that this person will be supported and believed.

I certainly don’t “need” it to be correct, I think I posted my reservations well, and I stand by my subjective opinion based on the context of the thread that it was taken seriously and engaged with by sympathetic people.

Anyone who said anything supportive of Murphy was immediately denounced as a troll. This poster wasn’t. If it was satire, it was not treated as such.

Ultimately, there are a thousand examples of the SJWs saying far more egregious things, this is a drop in a swimming pool. This example is merely relevant specifically to the case we’re discussing, and to the whole “rape culture” argument more generally.

AFC quoted: “women who experience sexual harassment but haven’t recorded the whole thing on tape are going to be terrified of being sued”

This woman does carry a memo recorder (for other purposes), and thinks that women who don’t try to record evidence of something that happens to them or that they observe, are letting down their sisters.

Cell phones, keychain video cameras — most of us probably carry them already. To say “Don’t apply fairness or ask for evidence (because we are too helpless)” is … somewhat counter-productive overall.

Perhaps that eggs/omelet metaphor does tend to get used mockingly, but in and of itself it’s a bullet I’m prepared to bite, as are most people if it is interpreted neutrally. I don’t think many people here believe that the US military should’ve pursued a policy of inflicting zero civilian casualties for the duration of World War 2.

In principle, I’m prepared to crack eggs to an appropriate extent for any genuinely worthwhile cause. And I believe that preventing sexual harassment is a genuinely worthwhile cause.

As for this case, if you’ll momentarily humor me with the notion that it was not a false flag, the problem is that the purported metaphorical “omelet” consists of “awareness” about workplace sexual harassment, which is to come about through false claims against specific individuals being broadcast. There is not to be a frank discussion of the actual specific details of the case and how it relates to concerns about personal boundaries/ambiguity/etc., because of course that would lead to the unhelpful conclusion that the accusations were fabricated.

Might I suggest a more sensible plan:

1. Steal underpants
2. ??????
3. Omelets!!!

It’s all very well to lecture me about the importance of breaking eggs in order to make an omelet. But not when the lecturer doesn’t even bother to place the mixing bowl under the egg, so that the entire contents end up just spilling onto the kitchen floor.

They’ve got a draft up, I’m not sufficiently familiar with Kickstarter to know whether that’s a good sign or not.

I’m a bit concerned about the retail price they’re suggesting – it works out at £30 (or $46) – seems a bit steep for what is essentially a novelty item. Cheaper, and it would be a great Christmas present.

I am skeptical of the no-little-ice-age post. They talk a lot about the temperature data they were looking at being white noise, and later mention that they could find no autocorrelation in the data series. That suggests to me that the data is not reflective of annual temperature, because annual temperature data has a complicated autocorrelation structure and is very red; similarly, AFAIK most reconstructions produce red noise. This makes intuitive sense – you’d expect a level of persistence in climate from year to year.

I’m not sure their logic holds, either. If a 25-year period of white-noise data by chance has more low values than average, then that 25-year period was lower than average. The noise in climatic measurements isn’t measurement error; it’s actual year-to-year variance in average temperature. If the LIA or the like is used as an argument for cyclic behaviour, their logic would hold – sometimes the noise just lines up. But as an argument for the LIA not existing, I don’t think that works.

A key point they glossed right over: they’re using annual temperature reconstructions, not any of the excellent daily series which are available freely with a Google search (e.g., the CET, Uppsala, etc.). They went from 25-year smooths back to annual smooths and said “it looks like white noise” (my response: no, it doesn’t, you have no idea what you’re talking about).
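To make the white-versus-red distinction concrete, here is a minimal sketch (my own illustration, not anything from the post being criticized): a white-noise series has roughly zero lag-1 autocorrelation, while a simple AR(1) process – a crude stand-in for year-to-year climate persistence – shows strong positive autocorrelation. Function and variable names are mine.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation: covariance of adjacent values over variance."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

random.seed(0)

# White noise: independent draws, expected lag-1 autocorrelation near 0.
white = [random.gauss(0, 1) for _ in range(5000)]

# "Red" noise: an AR(1) process x_t = phi * x_{t-1} + e_t with phi = 0.7,
# so each year partly remembers the previous one.
phi = 0.7
red = [0.0]
for _ in range(4999):
    red.append(phi * red[-1] + random.gauss(0, 1))

print(lag1_autocorr(white))  # close to 0
print(lag1_autocorr(red))    # close to 0.7
```

If annual temperature really behaved like the first series, finding no autocorrelation would be unsurprising; the commenter’s point is that it should look like the second.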

This looks to be the usual thing that happens when (warning: inter-field abuse incoming) economists think they are real scientists, and apply their pseudo-statistics to problems in an actual science with actual, real-world, instrumented data. It happens fairly often; the Freakonomics guys are a perfect case study. Just because you’re an expert in your field does not qualify you to do high-quality research in another field, especially when that field has an incredible amount of background theory and information which you need to grok to be able to make sensible inferences. And knowing something about economic time series is *not* sufficient to do any other type of time series analysis. Source: my doctorate is in time series, and I’m fully aware of how horrible most economic time series analysis is. It’s terrifyingly bad.

(so, probably a completely irrelevant article, and not worth wasting time on — similar to Freeman Dyson’s views on climate change. Just because he has a Nobel prize doesn’t qualify him to know the first thing about climatology.)

Is there anything wrong with using annual reconstructions here? I was under the impression CET is pretty unreliable prior to the late 1800s. Day-to-day variance doesn’t seem to add much to what you’re trying to find. I guess if the LIA was characterised by normal summers and very cold winters, annual records could mask that?

But yeah, if they think it’s white, either their stats are wrong, or the data is wrong, so never mind.

Fascinating post by Scott Aaronson. I’m not convinced by the lookup table thought experiment, however. To recap for those who didn’t read: it’s an enormous lookup table, bigger than the physical universe, which contains pre-computed results for your brain’s possible responses to almost any possible combination of sensory inputs. (An example of an input could be a question asked, or, for a longer sequence, your question encoded, followed by the brain’s response and then the next question.)

If the lookup table is finite, then there is some conceivable input (of enormous length perhaps, years and years’ worth of conversational data) that it fails to pass the Turing test for. The fact that it passes the Turing test and appears conscious for most inputs is really just cheating; the fact that it’s enormous means it can afford to be a really, really good fake. The lookup table contains some possible responses a conscious being would give, but like a lookup table for whether a given string is a palindrome, it has to fail for some inputs, since the problem can’t be solved without working memory. The lookup table isn’t conscious any more than a lookup table can solve the palindrome question, because no matter how large it is, it has to fail for some inputs.

If the lookup table is really large, it could have one output for every input, where an “input” is the entire set of inputs a human being receives over a lifetime. A lookup table like this would pass the Turing test.

I don’t think restricting the size of the Turing test is a valid reduction, any more than you can restrict the size for any other computable problem. The palindrome problem can’t be solved in constant time and space for all inputs, even if a really large lookup table gives constant time and space for a large number of inputs. The Turing test is at least as hard as any other class of computable problem, since you can ask any computable problem in the framework of the Turing test. The fact that you have precomputed answers available for a large number of inputs doesn’t change the fact that the lookup table fails for the vast majority of all possible inputs. A brain kept alive indefinitely would eventually distinguish itself from the lookup table.

A tiny script could solve the Turing test for all words of one letter. A system that could truly solve the Turing test would be capable of succeeding for all inputs if it was allowed to run indefinitely.

This looks like something that Yudkowsky touched on; I see that in the comments, Aaronson responds to his idea (though I don’t see anyone quoting him): if something made the lookup table, then the lookup table is just an indirection layer atop the mind that built it. So Aaronson then asks, what if the lookup table was built from a small seed? But that just offloads the origin of things into the seed itself; you are, in whatever sense you care about, shifting the substrate of a regular mind to a lookup table, like a VFX artist “baking” a texture.

How is this fundamentally different from copying a mind into a computer and asking if that is sentient? Yes, you had to use an existing mind to make it, but once you’ve created it, you can question it independently.

That’s a little different than my point. The lookup table is simply a recording of the mind’s responses for a finite number of inputs. A true simulation of the mind would work for any possible input, the lookup table is by definition finite.

My general point is that Yudkowsky’s GLUT would be distinguishable from a conscious entity if you dig deep enough: it can only contain a finite number of responses, while a conscious system should be capable of anything. You could be fooled by a process that simulates consciousness if you start talking to a recording that lasts a second, have a short conversation with a chatbot, or have a longer conversation with a lookup table. But eventually any entity that just stores canned responses will be limited in ways that a conscious system isn’t, even if the lookup table has a century’s worth of data.

Are there really an infinite number of things I could observe over the course of my lifetime? Unimaginably huge, sure, but infinite? In my intro linguistics class I was taught that language is infinite since you could say “I really like pizza”, “I really really like pizza”, “I really really really like pizza” and so on, but at some point you run out of lifetime in which to hear the “really”s.

But really, we don’t need to bother with a whole lifetime. Are there a finite or infinite number of inputs I could come across in the next minute?

There are a finite number of things that can be experienced within the lifetime a human body typically lives through. But if your brain/consciousness were kept alive indefinitely, there’s not necessarily an upper bound on the responses you would be capable of having and the extent to which you could evolve, while a lookup table would have limitations.

To put this all another way, the lookup table is in practice indistinguishable from a conscious entity under normal conditions. But if we’re going to admit abnormal conditions which allow the lookup table to exist, it seems only fair to allow abnormal conditions (a test that lasts for a thousand years) which expose the lookup table’s flaws. The fact that you can devise a thought experiment for a machine that fools most people doesn’t mean we have to change our definition of consciousness if another thought experiment shows where the machine breaks down.

“Lookup table” sounds like the way I think of my mother’s thought processes. Unfortunately, she often returns responses that don’t make much sense based on the original inputs–they are only tangentially related. For example, an input of “Russian Revolution” returns “Tolstoy.”

One of the things that strike me about these debates is the assumed binary nature of consciousness. This seems to me arbitrary and anthropocentric – much in the same way it used to be completely unthinkable that anything other than humans was intelligent, until all sorts of animals were found to pass various bright-line tests for intelligence.

Consciousness seems to me more easily considered an emergent property. Humans, dolphins, dogs, cats, mice, and birds are all easy yesses. But at what point would we consider a living organism not conscious? Ants? Lobsters? Fish?

Is a bacterium conscious? Individual cells in our own body? If no, is there a bright dividing line? Or is it more like dropping black paint in white? When does it stop being white and start being gray?

Consciousness seems to me more easily considered an emergent property.

“Emergent property” seems to me more easily considered a redundant term.

My suspicion is that the correct question is not “does X have consciousness?” but “does X use narrative processing?”, with consciousness as we now think of it going into the bin of things that do not actually exist, like “sky.”

This still seems to avoid the point I am trying to make. Some processors will spend more computational power on narrative processing, others less. That which spends very little power on narrative processing is only dimly self-aware, and its self-awareness bears little resemblance to our conscious experience.

They could have thought this through. They should have thought this through. It does seem sadly plausible, however, that they were not in fact thinking at all when they chose this path, and so remained (wilfully) ignorant of what they were setting in motion.

Leveling the accusation may have been a factually-independent decision for them, an action that testified to a higher truth about male/female relations without regard to quotidian details like whether this man was guilty of these crimes.

At the very least, it seems certain that they had no idea that they themselves might face consequences for the false accusation.

Then again, maybe it’s too charitable to think that, after a period of self reflection, Nina believes now that at the time of the accusations she had no idea what she was doing; perhaps it’s just that the alternative is too painful.

The wording puzzled me, as I suspect it puzzled you, and it is not the word ‘unwittingly’ alone. The tone swings so widely within the apologies, with parts of them being perfectly clear-cut: “no factual basis at all”, “unreservedly apologize”, “based on gossip and innuendo, not facts”.

Yet other parts of these public apologies sound, to put it mildly, insincere: “[w]hile I continue to feel that [sexual harassment] is an issue that we must all address, I do now realize that Mr. Murphy was the wrong target for my post”, “I was ill prepared for the damaging impact of these unfair statements”, “mistakes have been made”. Each of those would read comfortably alongside the classic nonapology: “I’m sorry (that I got caught).”

Perhaps the former words came from Murphy or his lawyers, and the latter from the two women. Or perhaps that gives them too little credit for reflection, repentance, or even simple sense.

One wonders also what consequences, if any, follow the three parties to this case, and whose careers suffer lasting damage.

“They could have thought this through. They should have thought this through. It does seem sadly plausible, however, that they were not in fact thinking at all when they chose this path, and so remained (wilfully) ignorant of what they were setting in motion.

Leveling the accusation may have been a factually-independent decision for them, an action that testified to a higher truth about male/female relations without regard to quotidian details like whether this man was guilty of these crimes”

I suppose it is theoretically possible that these two psychopaths were genuinely “unwitting” about the damage they caused to Murphy. In the same sense that I could deliberately swing a hammer directly into a man’s face and not intend to cause him harm; after all, my only intention was to swing a hammer into his face and any consequence thereof was secondary.

All versions of privilege are obvious examples; the plight of women, blacks, sexual deviants etc, are the collective responsibility of anyone who fits into a crudely constructed oppressor class, with little to no regard for their actions as individuals.

Most discussions of (sexual) objectification work on the same sort of principle. Is an individual comic artist bad for drawing sexualized women? Is an individual reader wrong to enjoy these drawings? Not really. But these two individuals are part of a larger culture, where female characters supposedly serve as nothing but fetish fodder, and these practices in aggregate exclude female artists and female readers. So while not wrong as individuals, these individuals are still guilty, because they inherit the collective responsibility of comics to be egalitarian or inclusive.

The term ‘mansplaining’ is of course a riff over the same chords. It is not imperative to show that the explanation is wrong, irrelevant or untimely, and to thus show that the individual should have kept his silence. Instead, a ‘mansplanation’ is inherently wrong, by virtue of contributing to a larger culture of men crowding out women’s opinions.

The part of me that comes up with EBW-but-satisfying daydreams would like to offer them a simple choice: “You can take personal responsibility for your actions, and face the consequences. Or you can place the blame, loudly and publicly, on your community, and avoid personal consequences. Your choice.”

As mentioned elsewhere in this comment thread, it should be realized that this was not an isolated incident. Several of these cases were brought forward at around the same time, and much furor was raised about them. Google “Listen and Believe” for a sampling of the background conversation.

I hesitate to call their original accusation “scripted”, but it was very nearly that.

I never understood libertarians on such issues. First and foremost: What kind of believer in markets are you, if you think higher price won’t decrease consumption? Even in absence of evidence, you should assume that soda taxes reduce consumption because it is really, really rare that higher prices don’t reduce consumption.

The strongest thing Bellemore actually can argue is that it doesn’t reduce consumption much at the levels they’re currently at. Which is a bad argument for removing them, unless you also buy the vulgar-libertarian motivation he also hints at, that this is just a ploy by government to get more tax money.

But what is the big deal here? Provided you agree with the policy goal of reducing sugar consumption through meddling with incentives, then surely it’s better to do it in a way that makes money, rather than costing money? Its limited effectiveness in an absolute sense per percentage point of tax would matter if it weren’t cost-effective. But it clearly is, as it’s revenue-positive. That being the case, low price elasticity of sugary drinks is an argument for a higher rate, not for removing the tax altogether.

If the size of government measured in dollars is what’s really bothering you, you can always give that money back in the form of some tax break on something else.

Provided you agree with the policy goal of reducing sugar consumption through meddling with incentives…

We don’t, and we also aren’t immune to the glee of getting to nail someone with their own arguments. If soda taxes are regressive and don’t work, then yeah, it’s sort of interesting that they don’t work, but not nearly as interesting as the fact that we now get to rake paternalistic busybodies over the coals for hurting the poor.

Provided you agree with the policy goal of reducing sugar consumption through meddling with incentives…

We don’t,

But there’s the rub, isn’t there? It’s only if you think government making money is bad in itself, and government caring about public health is bad in itself, that this article has any kind of point at all. Otherwise it’s just providing arguments for higher sugar taxes.

Hurting the poor is a point the article doesn’t make. But if hurting the poor is what you’re really concerned about, I for one am perfectly happy with using the tax revenue as a non-means-tested, straight-up cash payment to anyone living below the poverty line.

Fully refunded pigovian taxes is a terrific idea which in theory, everyone should want to get behind. But actions speak louder than words, and few do.

I’m concerned that they’ll add the tax and make it refundable, but then keep raising the tax as a way to raise revenue. I don’t think that’s an unreasonable suspicion, and it’s why I don’t trust these “revenue neutral” proposals.

It’s only if you think government making money is bad in itself, and government caring about public health is bad in itself, that this article has any kind of point at all.

I don’t have a dog in this fight, but it seems to me that there’s a far more reasonable interpretation: taxation is negative in itself, public health is positive in itself, and if you aren’t getting enough of the latter for the former, then it’s a bad initiative.

I’d expect the returns on this sort of sin tax to be strongly nonlinear, so invalidation at a certain price point doesn’t necessarily sink the whole concept — but it’s certainly evidence against it.

Even as a libertarian myself, I agree with your criticism of many libertarians on this issue. When you oppose a particular policy, it’s very easy to fall into the trap of using any possible argument against that policy one can find.

In a similar vein, I’m very sceptical when libertarians claim that drug legalisation would lower the cost of drugs while at the same time not increasing consumption. But demand curves are almost always downward-sloping, so my prior for that is very strong and it would take lots of strong evidence to convince me otherwise. (For the same reason, I’m also very suspicious of studies that purport to show that raising the minimum wage doesn’t reduce employment.)

I concur with you that if we agree with the policy goal of reducing sugar consumption, then a tax is the best way of doing it. But I (and most libertarians) don’t agree with that policy goal. I don’t see any compelling reason why the government should interfere with people’s diet. When people drink sugary drinks, they themselves get the benefit (pleasant taste) and pay the costs (worse health outcomes). Third parties are only affected in very roundabout ways.

libertarians claim that drug legalisation would lower the cost of drugs while at the same time not increasing consumption.

The claim is usually that it wouldn’t increase the number of drug addicts, which is a very different claim than “would not increase consumption.” If everyone can cheaply purchase the least harmful drugs, and addiction treatment is available without massive stigma and legal paranoia, then consumption goes up but harm, which is what we cared about, does not.

There is no legal paranoia around addiction treatment in Norway, and probably less stigma around it than being addicted in the first place. It’s still expensive and doesn’t work well.

(By the way, how much stigma is there really around addiction treatment in our cultures, when half of Hollywood have been through it?)

Why do you think people would gravitate to the least harmful drugs? I guess in practice, you mean people would switch from alcohol to cannabis. But inside the group of illegal drugs, or inside the group of legal drugs, I see little evidence that people “rationally” gravitate to the least harmful drugs – at least if we speak of harmful in the addiction sense.

Addiction treatment is expensive and unreliable. Look to the Hollywood stars again, who can afford whatever it costs. I see no plausible mechanism why we would suddenly get much better addiction treatment under drug legalization. Addiction treatment for legal activities/substances (gambling and alcohol) has the same problem of being pricy and ineffective, as treatment for illegal drug use has.

(By the way, how much stigma is there really around addiction treatment in our cultures, when half of Hollywood have been through it?)

According to a personal friend, a doctor’s wife who’s had to battle a years-long painkiller addiction, tons. The reputational hit was severe enough to warrant moving to a different city.

Why do you think people would gravitate to the least harmful drugs?

Because people are modeled as rational actors until proven otherwise. Our existing data says that the specific drugs people choose are determined foremost by their relative ease of availability, but also suggests that people consider how bad a given drug’s side effects are as a secondary concern. e.g. Crack never made a comeback even once it was price-competitive again, because people knew that most of the crackheads had died of it and avoided it for other options.

If all drugs were equally available, we would therefore expect people to choose the least harmful ones.

I agree with you completely. The whole idea of “stop the war on drugs because it hasn’t eliminated drug use” is really bizarre. It would be like trying to legalize murder because we haven’t managed to get it all the way down to zero. Even if murder rates go up (as they did in the ’70s and ’80s), you don’t just give up trying to enforce the law. The argument for legalizing drugs is not that drug use would plummet. It’s that the costs of enforcing drug prohibition are not worth it (a very controversial proposition, of course).

In the Irish budget, the so-called “old reliables” are the goods that have VAT increases put on them: petrol, alcohol and tobacco.

The argument (put forward by anti-smoking groups) is that, by making cigarettes ever more expensive, you are going to encourage people to give them up. Except that this hasn’t happened in sufficiently large numbers to make a reliable source of revenue for the government collapse; instead, what you get is smuggling.

“Soda taxes” won’t make people stop drinking soda; they’ll drop something else out of their diet instead (probably cutting back on the expensive items, that is, fresh fruit and vegetables).

It absolutely has happened that people smoke a lot less than before. Disentangling the effect from taxes, scare campaigns, prohibiting ads etc. is obviously not easy. Sure, there’s smuggling, but smuggling is costly in other ways (risk of getting caught, maybe risk of getting murdered by a competing cigarette-smuggling cartel…), so it does not fully negate the effect of increased price.

And as I argued below, and which has been shown many times (I linked to one of the studies), even heroin is price sensitive. If even heroin addicts use less of it and use more substitutes when it gets more expensive, I expect smokers do too.

As for alcohol and tobacco, taxes (at the levels they’re currently at) do not stop people from consuming those goods altogether, but taxes do have an impact on consumption:

“it is well established that the earlier a person starts to drink, smoke or use illegal drugs the higher the risk of later abuse (Hawkins et al. 1997; Foxcroft et al. 2003). […] There is evidence that people drink less if the price of alcohol increases (Room 2004) and that those of particular concern, heavy drinkers and young people, both respond to price increases by drinking less (Sutton and Godfrey 1995). However, the effectiveness of raising taxes is not clear cut, with consumption of popular drinks such as beer and wine being less responsive to price rises than other alcoholic beverages.”

(The Psychology of Lifestyle, Thirlaway and Upton).

“Effectiveness of Population-Wide Interventions Reducing the rate of tobacco use worldwide is one of the most important health care goals for the prevention of chronic diseases, including CVD [cardiovascular disease]. Tobacco control and prevention policies described in the literature as population-wide prevention strategies have proved very cost-effective. Although the estimates in the literature are subject to local variations and each country is guided by local policies, increasing taxes on cigarettes and tobacco has been found to be the most cost-effective antismoking intervention [20, 45, 50, 56, 57]. Furthermore, interventions based on tobacco taxation have a proportionally greater effect on smokers of lower SES and younger smokers, who might otherwise be difficult to influence. Several studies suggest that the application of a 10 % rise in price could lead to as much as a 2.5–10 % decline in smoking [20, 45, 50, 56].”

(A Systematic Review of Key Issues in Public Health, by Boccia, Villari & Ricciardi). Note that influencing young people in particular is important both in the smoking and alcohol context, because young people drink more and “It is usually teenagers who experiment with smoking, with very few smokers starting after the age of 25 years (Piasecki 2006).” (Thirlaway and Upton).
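The quoted figures (a 10% price rise yielding a 2.5–10% decline in smoking) correspond to a price elasticity of demand between roughly −0.25 and −1.0. Under a constant-elasticity demand model, the implied consumption effect can be sketched in a few lines — note the elasticity values here are read off the quoted range, not taken from the underlying studies:

```python
def consumption_change(price_rise_pct, elasticity):
    """Percent change in consumption under constant-elasticity demand:
    consumption scales as price ** elasticity."""
    factor = (1 + price_rise_pct / 100) ** elasticity
    return (factor - 1) * 100

# Elasticities of -0.25 and -1.0 roughly bracket the quoted 2.5-10% decline.
for e in (-0.25, -1.0):
    change = consumption_change(10, e)
    print(f"elasticity {e}: 10% price rise -> {change:.1f}% consumption change")
    # -> about -2.4% and -9.1%, consistent with the quoted range
```

The same arithmetic is what a soda-tax advocate would apply to sugary drinks, given an estimate of their own-price elasticity.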

I doubt a (high) soda-tax would stop people from drinking soft drinks, but judging from the related public health literature there’s every reason to believe such taxes will have an impact on consumption if set at the ‘right’ level.

it is well established that the earlier a person starts to drink, smoke or use illegal drugs the higher the risk of later abuse

In the absence of controls designed specifically to address the issue, I’d strongly expect this to be capturing differences in impulse control rather than anything intrinsic to mechanisms of abuse. In which case interventions designed to prevent early use won’t do jack from a public health perspective.

That first sentence was actually unnecessary to include in the context of the point I was trying to make (taxes affect consumption patterns) and I’m not sure now why I included it in my comment. You’re of course correct that there’s probably an issue here. In that context it’s interesting to note that many interventions aimed at young people in the context of smoking and alcohol have failed to give positive results, especially with proper (long-term) follow-up. Taxes are different – this is one of the few interventions we know will have an impact, which is why they’re considered cost-effective in CEAs in both the alcohol and smoking context (Boccia, Villari & Ricciardi). A related quote:

“Many interventions to encourage sensible drinking are aimed at adolescents and young people with the goal of preventing the establishment of unhealthy drinking habits. The rationale for a predominance of interventions for this age group includes the indisputable fact that young people are the heaviest drinkers in society […] Many early drinking interventions are educational in nature. In essence these are risk communication messages and the evidence from psychological research is that improving risk perceptions will have little impact on levels of drinking. Unsurprisingly then, there is little evidence that alcohol education and health promotion have any positive effect on drinking habits in Britain […] These campaigns are heard and understood because knowledge increases in targeted populations […] so it is not that the message is failing to reach the designated audience, rather the message has no impact on behaviour. […] Foxcroft et al. (2003) reviewed the effectiveness of programmes designed to prevent excessive drinking in young people. Worryingly, [they] found very little evidence that any of these programmes were effective. Among the studies with medium-term followup that met the methodological guidelines the majority, 19 studies, found no evidence of intervention effectiveness. Several of these studies had previously reported short-term effectiveness which demonstrates the importance of longer term follow-up. […] There are two concerns from these studies on early drinking interventions. First, there are a wealth of studies that report no reduction in any measure of drinking. Second, research has failed to consistently test and tease out what is effective.” (Thirlaway and Upton).

Increasing alcohol taxes in general is not considered an ‘early drinking intervention’ in this context, but if it were the conclusion above would change slightly for reasons already mentioned.

“capturing differences in impulse control rather than anything intrinsic to mechanisms of abuse. ”

One thing to note, though, is that there are differences in impulse control over the life span of an individual, and that both intrapersonal and interpersonal impulse-control differences may be important to consider in an intervention context because of path-dependence aspects of consumption. Teenagers are in many contexts less risk-averse than people in their 40s, for reasons which are partly hormonal, and parents do lots of things to stop them from doing stupid stuff during that period of their lives which they’ll regret later. The observation that most people start smoking before the age of 25 may be related to young people being more sensitive to social pressures; once people are older they’re no longer stupid enough to take up the habit even if their friends smoke.

I’m not a big proponent of high taxes in this context, but it’s worth keeping in mind that even if people’s preferences are fixed (educational approaches seem to suggest this to be the case for practical purposes in an intervention context, at least in the British alcohol context), it’s not like you can’t take those preferences into account when designing interventions and perhaps still impact behaviours in the long run.

I’d strongly expect this to be capturing differences in impulse control rather than anything intrinsic to mechanisms of abuse.

Why do you strongly suspect that? Do you really think it makes no difference in life outcomes if, for instance, a thirteen-year-old quits organized sports and gets involved with older partying teens instead?

To me this represents a ridiculous bio-determinism. And it’s not just bio-determinism; it’s genetic determinism at the expense of other biological factors. Don’t you think the age at which you start drinking regularly has any effect on your brain’s development?

Scott Aaronson’s ideas about consciousness are interesting… but I’m not biting.

The bit about how — given his notion of consciousness — this would mean that copies of you are not conscious, and wouldn’t act like you because they don’t share your quantum-mechanical microstate, seems very much like motivated reasoning. “I don’t _want_ to believe that you could make a copy of me that would behave exactly like me… so I’ll come up with a theory where you can’t.”

But in practice, you can easily test what happens when you make an approximate copy of someone’s brain state: Just go have a conversation with someone who has profound anterograde amnesia (loss of ability to produce new long-term memories.) Then leave for a few minutes and have the same conversation again. This should dissuade you from any notion that humans are especially free-willed or unpredictable creatures.

There are few people who permanently suffer this condition (and how fortunate that is, since it’s frankly incredibly disturbing, at least to me, and tears away a lot of illusions about the nature of consciousness and the self), and they tend to get studied heavily by psychologists. But plenty of people suffer short-term anterograde amnesia due to things like car accidents and surgical anesthesia. My understanding is that their behavior is generally very predictable, and makes for a great parlor trick (I shudder to say this since I find it kind of horrifying to imagine happening to me, but I find that observers tend to describe being entertained by the effect): Say something to them, then wait a few minutes for them to forget, and say the same thing — it will yield the same response with frightening accuracy and repeatability.

Some of my friends like to tell a story from the aftermath of a car accident, in which nobody was seriously hurt but a friend of mine hit her head. While standing around afterward, she was handed a cup of water, and amused herself by throwing it on another mutual friend and laughing… and then this entire interaction was repeated verbatim multiple times, to the amusement (???) of all.

I am, and have been for a very long time, trending inexorably into the p-zombie camp. I see no compelling reason to believe that I have the sort of consciousness (free will, or whatever the term of the day is) that other people seem so goddamned determined to believe they have. I’m not entirely sure it is in good taste to make the God analogy, but in either case, as we learn more and more about the universe, the unexplained space into which we can imprint these preexisting notions keeps shrinking, and the heroic effort needed to maintain them explodes exponentially. At some point it may be instructive to take a step back and really look at the edifice you’ve built. By carefully considering your (perhaps unconscious) axioms, you may find that striking a few leaves a coherent system with an elegance that a computer scientist like Scott Aaronson should appreciate – without losing an iota of descriptive power.

I have an inkling that what you mean is actually eliminativism of the Churchland brand, which doesn’t have all that much to do with p-zombies. Or do you think that you’re the only conscious being and everybody else is just a p-zombie?

I don’t understand why unpredictability is conflated with free will, as if somehow being predictable makes you an automaton.

The people who are most unpredictable are generally those who have the least free will (in the conventional sense). They are those who have the most severe mental problems.

Are you the kind of person who thinks that you should make decisions by rolling a die you have in your pocket? Would you think that someone who made decisions that way had more free will than someone who used their innate characteristics along with their accumulated knowledge to make decisions?

Well someone who does the latter is going to be predictable, even if they have perfect free will.

Didn’t the accident victim notice, in round 2, that the friend was already wet? Other than that, though, I don’t find it disturbing. I pride myself on my predictability (predictable by my friends – and my enemies are located far away, e.g. in Al Qaeda camps, so no worries on that front). It makes me easy to locate for get-togethers and such. It’s also a sign of stable and relatively elegant value and belief structures.

I know this isn’t an open thread, but has anyone else read “Sapiens: A Brief History of Humankind”? I decided to read it based on some rave reviews and it was incredibly awful. Everything was either trivially true or really stupid. I can’t remember at any point thinking “wow, that was really insightful”. It got so bad that I started highlighting all the dumb things he said. Here’s a ridiculous one in Chapter 8:

“Yet it’s a proven fact that most rich people are rich for the simple reason that they were born into a rich family, while most poor people will remain poor throughout their whole lives simply because they were born into a poor family”

No citation of course, because who needs a citation for a “proven fact”?

And in the same chapter, he wonders why the patriarchy is so universal. Some say that it’s because men are stronger. But that can’t be right because “There are also many women who can run faster and lift heavier weights than many men”. He says something more sensible in the next paragraph but it still baffles me why he would actually put that in his book. And there isn’t even anything good in it that can’t be read in some other Big History book. I honestly don’t see what people like about it.

most rich people are rich for the simple reason that they were born into a rich family

What exactly is “simple” about this explanation? Even if someone proved it statistically, I still wouldn’t know whether being born into a rich family helps you become rich because:

a) you are likely to inherit genes that help people become rich in a given environment; or

b) you are likely to receive memes that help people become rich in a given environment — so even if you later lost everything except your memory and habits, you could get rich again by behaving the right way; or

c) your parents will likely make some important life choices for you during your childhood (just like their parents did for them), and the consequences of those choices will help you become rich for the rest of your life; or

d) your parents will protect you and throw tons of money at you for some time — with all obstacles removed, you can more easily learn and reach your true potential, and later you are already able to support yourself with thus gained skills; or

e) your parents will make you a member of an elite conspiracy (which could be just your extended family, but maybe it is much larger) that keeps protecting each other during their whole life — so you actually never have to develop any skills, and will still remain rich forever.

Five minutes of thinking, five different (not mutually exclusive) possible explanations for how specifically people “born into a rich family” remain rich, each giving a different model of society and predicting different outcomes under various interventions.

Like most issues in the social sciences, there’s always an abundance of counter evidence for almost any side of an issue, and a single correct consensus is almost impossible.

Why do people loot during riots? Because law enforcement is distracted and has its hands tied. This assumes people are rational and weigh risks and rewards. Irrational people would not be deterred that way.

Your study just shows what individual offenders do in the first year after a longer prison term — exactly as expected. The tendency to antisocial behavior declines steeply with age. Someone released from prison at age 30 would be less likely to commit another crime than a similar inmate released at age 27.

I have an economics degree, and I certainly have some investment in the model that rational actors are motivated by incentives. In my comments here and elsewhere, I often criticize perverse incentives that promote bad behavior. And certainly at least some percentage of crooks are rational actors.

Still, it’s hard to see how the prison boom has led directly to plummeting teen pregnancy rates and other features of the “good behavior epidemic” cited in the original linked article.

If heroin is price sensitive, probably sugary drinks are too (as indeed, the study does not disprove). This is good news for fans of capitalism. The more things that are completely insensitive to price, the more all our positive assumptions about the market are undermined.

Longer sentences, awful prisons, and the highest incarceration rate in the world still have very little proven effect. Immediacy and certainty of sanction still dominate, and can’t be compensated for by harsher sanctions.

Andrew Ng’s claim is that the scale of AI that Eliezer is concerned about is so far away there’s no point worrying about it now. Similarly, Aaronson is arguing against Kurzweil’s optimistic projections. Eliezer is arguing that the timing of an AI breakthrough is extremely uncertain – we have very little information about how hard the problem is – and that because of the likely unpleasant consequences if we don’t think very hard about it, it’s a good idea to think very hard about it now.

There’s also a concern that if AI research is hard, then AI value research is even harder. Not only does it have to deal with many difficult issues that are in the field of AI research, we’ve worked on some of the underlying philosophical problems for three thousand years, and our best bets on the matter seem to be religions and Peter Singer.

It’s also hard to estimate time periods for normal software development, and much harder when you don’t even know what the output, or even the general shape of the mind, might look like – especially since a dangerous AI might not even be human-like, sentient, or (hopefully) conscious.

I think we know so very little about strong AI right now that there’s almost nothing useful we can say about friendliness, so I don’t see the point of working on it. Once we know how to build a strong AI, we can think about friendliness (before actually building the strong AI, naturally).

I’m also entirely unconvinced by Eliezer’s box experiment. The fact that the logs are not public is highly suspicious, and makes me think there is cheating involved (e.g. meta-arguments about how it’s good for publicity if Eliezer wins, etc.)

I’ll read it more carefully later, but after a cursory pass I don’t see much that is likely to prevent strong AI (should it be developed) from destroying the world AND isn’t being worked on by mainstream researchers (e.g. verification has been an active area of research for decades).

Can you save me some time by pointing out specific research directions you think are important?

“Somebody else might do it” is a much stronger reason than you might think. There are benefits that accrue if you can make a contained hyperintelligent AI. There are reasons for people to do it. And if the first person to do it didn’t bother with the whole Friendliness problem, chances are they didn’t get it right by accident. If you do it first, and got Friendliness right, the new contained friendly hyperintelligent AI will be a rather good tool for not having unfriendly hyperintelligent AI popping up.

SA’s “Moloch” argument pretty compellingly argues everything is doomed forever WITHOUT a singleton (a single unrivaled power). So you could phrase it as “Either everything is doomed for sure, or everything is probably doomed, but maybe fixed forever.”

I fondly remember reading Godel, Escher, Bach in the 2000s and coming across the parts that were pretty sure chess-playing programs wouldn’t be able to compete with humans without playing in a human-like way (intuiting several candidate moves and then considering the candidates several steps ahead for positional properties, rather than just running minimax on a tree umpteen ply deep).

Well, obviously strong AI will be possible *eventually*. I thought that was obvious to everyone here. But Andrew Ng and Scott Aaronson estimate the time it will take in centuries or millennia – i.e., they’re saying it’s a *really really hard problem*, and that the chance of it being solved soon is close to zero. Therefore, according to them, there is no point in being concerned about it now.

I agree with them, and moreover, I think that there’s almost nothing useful to be said about AI friendliness in our current lacking state of knowledge about strong AI.

I’m pretty sure brain emulation and mind uploads will be here a lot faster than 100-1000 years from now. And once you get those working you are essentially at the same place you would have been if you were making general intelligence from scratch.

“I think that there’s almost nothing useful to be said about AI friendliness in our current lacking state of knowledge about strong AI.”

Friendliness is about figuring out how we can make human utility functions explicit and program them into a machine. An AI isn’t necessary for questions of how human desires can be written out.

I don’t see why mind uploading should be any easier than, e.g., a cure for aging.

Anyway, if AI is achieved by brain emulation, then the AI would already have a built-in human utility function. So friendly AI research wouldn’t make a difference.

As for friendliness: it seems to me that to code friendliness into a machine, one needs a semantic model of the world. But this is one of the main things that we don’t know how to do in AI! What’s the best way for an AI to understand “a chair” (let alone “a person” or “happiness”)? This is a difficult problem.

We already have a brain-emulation model for a roundworm that you can download onto your computer. We have trouble with more complex stuff, but I think brute-forcing works for this – the issue is having enough computing power to run it.

The linked article is not attacking those ideas, but the idea that understanding in biology was currently on an exponential take off or that you can do mind uploading with nanites.

I’m talking about destructive uploading and lab animals. I don’t think his objections apply to that.

We already have a brain-emulation model for a roundworm that you can download onto your computer.

I think this is a strike against whole brain emulations happening within the next few decades. We’ve known the connectivity of the nematode for decades, and yet emulations still don’t behave like real worms.

A quick Moore’s-law extrapolation doesn’t give us enough power to simulate a human brain in the next 40 years at the level we are simulating nematodes (a 1,000,000-fold speedup isn’t enough), and I don’t think most people expect Moore’s law to hold up that well.

Also, simulations don’t scale linearly; you need more than a twofold speedup to simulate a system twice as big.

This is even assuming that brute force is an option; the techniques used to determine a worm’s connectivity and neuronal behavior won’t work on humans since, unlike worm brains, human brains are not homogeneous.
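The back-of-the-envelope arithmetic here is worth making explicit. Assuming the conventional 2-year doubling period for Moore’s law and the commonly quoted synapse counts (rough figures, not measurements):

```python
# Moore's law vs. the roundworm-to-human scale gap, assuming a 2-year
# doubling time and the commonly quoted synapse counts.

def moores_law_speedup(years, doubling_period=2):
    """Compute gain from exponential doubling over a span of years."""
    return 2 ** (years / doubling_period)

worm_synapses = 1e4    # C. elegans: 302 neurons, under 10k synapses
human_synapses = 1e14  # human brain: ~100 trillion synapses

scale_gap = human_synapses / worm_synapses  # factor of 10 billion
speedup_40y = moores_law_speedup(40)        # 2**20, about a million-fold

print(f"scale gap: {scale_gap:.0e}x")
print(f"40-year Moore's-law speedup: {speedup_40y:.2e}x")
print(f"shortfall even assuming linear scaling: {scale_gap / speedup_40y:.0e}x")
```

Even under the generous assumption that simulation cost scales linearly with synapse count, 40 years of doubling leaves a roughly 10,000-fold shortfall, which is the point being made above.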

“A roundworm has 302 neurons and <10k synapses. A human brain has ~86 billion neurons and ~100 trillion synapses. We're off by a factor of like 10 billion – it isn't close at all."

We aren’t going to be modeling the human brain anytime soon – a good simulation is a person. Much safer to run parts of the brain or smaller animals.

"Anyway, once again, if AI happens via brain emulation then most of the friendliness research is useless, because the utility function of the emulation would be hard-coded in."

Unless we figure out how to reprogram the emulated mind, which is going to be the number one goal.

"We’ve known the connectivity of the nematode for decades, and yet emulations still don’t behave like real worms."

At the risk of pointing out the obvious, that means our models are wrong. Unfortunately I'm not aware of a model that tells us how quickly we improve our models so I can't say if this is something that will be solved soon or something that will take a lot of time to deal with.

Er, to be explicit, I think progress will go quickly once we figure out what we are doing and that more computing power will speed up how quickly we can figure out what we are doing. I have no idea what the background time for understanding the brain is though.

"Brute force might work eventually, but probably not soon."

I think you sketched the outer limit of time it would take, not the average. Crows are rather intelligent and their brains are a lot smaller.

What makes you think that after we have an emulation for a human (or a crow), we’d be able to figure out how it works and make modifications? That seems like it might be a ridiculously difficult task.

Also, while I can’t find a source for a crow, even a mouse has 100 billion synapses, which is 10 million times more than a roundworm; and as you pointed out, we don’t even have the right model for the roundworm!

All in all, I would say whole-brain emulation is unlikely to be possible in the foreseeable future (and even if it were, friendliness research is unlikely to help).

Alexander, it is not true that there are simulations of the nematode that don’t act naturally; it is simply false that there are any simulations of it at all. Knowing the connectome is hardly anything. We don’t know how complicated a model of the neuron we need. Maybe the very simplest model will work, but that has not been tested. You cannot just combine a model of the neuron with the connectome and get a model of the brain, even a lousy model. You have to have a lot of data about each synapse before you have a model. At the very least, you need to know whether each synapse excites or inhibits.

“What makes you think that after we have an emulation for a human (or a crow), we’d be able to figure out how it works and make modifications? That seems like it might be a ridiculously difficult task.”

Because if you can’t do that, you can’t model learning.

“Also, while I can’t find a source for a crow, even a mouse has 100 billion synapses, which is 10 million times more than a roundworm; and as you pointed out, we don’t even have the right model for the roundworm!”

They did run half a mouse brain on a supercomputer for half a second of (simulated) time.

Yes, it didn’t work (as the wiki puts it, “the output lacked structures seen in actual mice brains”), but I think they’ll get farther if they can redo models repeatedly until they match observed reality. Or at least have a full brain emulated – pretty sure the other half is important.

“It is simply false that there are any simulations of it at all.”

Terrible simulations are still simulations.

“Maybe the very simplest model will work, but that has not been tested.”

What? Wiki is pretty clear that the simplest model of the neuron has been tested and doesn’t work.

The only way this makes sense is if you are referring exclusively to higher-quality, more detailed, more robust models, and the people in the simulation field are trying things simpler than that.

There are many people within the field of machine learning who believe that safety issues for future artificial general intelligence are important and need to be worked on. Take a look at the Future of Life Institute’s open letter.

That open letter is very mild in its statements, and says nothing about catastrophic risks. It’s likely that many of those signatories don’t actually consider strong AI to be an important risk to think about.

Andrew Ng at least accurately portrays the feeling in the AI community that the problem is *really hard*. The survey seems consistent with this view.

I agree that this doesn’t mean we are safe for the next century, but I would argue that it does mean we are safe for at least a couple of decades.

And if we’re sure strong AI is not going to happen in 20 years, why work on friendly AI now, when we don’t understand the problem very well? It’s much better to wait until the field at least seems tractable, because then we’d be in a better position to actually mitigate AI risks.

It is not at all clear that such ideas are “valuable for all kinds of AI design”. I mean, these ideas are interesting, sure, but maybe it’s inherently difficult to supply an AI with a utility function that mentions the real world (after all, specifying real-world semantics in code seems really difficult). Or maybe strong AI is fundamentally non-Bayesian. Or maybe there are other black swans that we can’t even think about right now.

I’d actually put good money on these ideas being either useless or trivial by the time we have strong AI (even assuming no one conducts friendliness research until the last minute).

“It is currently very hard and we have no idea how to do it” is not the same as “It will remain hard for decades to come.” There are no physical restrictions on building an AGI, so there are no grounds for confidence that it will take many decades. We need to increase uncertainty in all directions.

It’s currently very hard, which surely gives us *some* lower bound on how much time it will take. I think 20 years is a really safe lower bound, but even if you disagree, surely you agree that 5 years is safe? I mean, we don’t even know how to start approaching the problem! The point is, perceived difficulty does give us a non-zero lower bound rather than leaving us in complete uncertainty.

Anyway, once again, if we have no idea how to build AIs, then we also have no idea how to make them friendly or unfriendly (because we don’t know how the AIs will model the world).

It was always completely obvious that removing Qaddafi (or Assad) was not going to help the people, either there or in neighboring countries. But don’t blame Obama! It was all really Sarkozy and Obama just went along so it wouldn’t look like France could act unilaterally.

I was reminded of this a couple of weeks ago when Lee Kuan Yew died and people linked to various interviews, including one where he was asked about the Arab Spring and replied that people were terribly confused in lumping all Arabs together, and that it was important to distinguish tribal societies from the nations of Morocco, Tunisia, and Egypt.

Or Bernard Henri Lévy. (Well, that might be unfair, because Sarkozy did not have to listen to him, but I hate “public intellectuals” who rampage into fields in which they have no proven expertise or aptitude. Especially when all signs point to it being at least partly in the service of their egos.)

Sarkozy did not have to listen to BHL. And you do not have to listen to him as he writes history. I’m not saying that he’s wrong, but I’d rather hear it from a disinterested source. Which probably doesn’t exist.

Sarkozy just saw a potentially easy political victory, and acted accordingly. We are talking about a guy who invited:
1) Qaddafi for an official visit in France in December 2007,
2) Assad to the 14th of July celebrations and military parade in 2008.

“But don’t blame Obama! It was all really Sarkozy and Obama just went along so it wouldn’t look like France could act unilaterally.”

Can you explain why you would not hold Obama responsible for his role in this, even if it was due to apathy or whatever and not thinking it would actually work? Wasn’t he supposed to be a leader or something?

The cynical answer is that Obama’s decision had nothing or very little to do with the effect these actions would have on Libyans (who are not his problem) and more to do with the political and military issue of Libyan terrorists and/or pre-commitments made to the French government that were important to uphold for reasons having to do with those commitments’ particulars. The French cashed in their chips and said “pay me in the form of bombing Libya,” so we did.

I was responding to the headline of the article Scott linked, “Obama’s Libya Debacle.”

Yes, of course we should hold Obama responsible for his role, but to do that we need to know what his role was. Yes, as “leader of the free world,” he might have been able to stop the intervention. But I doubt that would have been worth it. I don’t blame him for not going down that path. I do hold him responsible for trying to avoid it looking like France could act unilaterally, although I don’t know whether that was a good or bad thing.

>Obama just went along so it wouldn’t look like France could act unilaterally.

If Obama had said no, the strikes would not have happened. NATO required US intelligence and logistical support to make them happen. Obama was either stupid enough to think that things would turn out well, or cynical enough to throw a country under a bus to avoid looking bad. Either way, it’s the least excusable action this administration has taken, by a wide margin.

France is one of the few nations with all of the resources – including intelligence and logistics – necessary to engage in US-style warfighting. France didn’t need any help at all to begin conducting air strikes at Gadhafi’s forces, providing close air support to the rebels, etc. It is conceivable that France could have single-handedly brought down the regime without any US help. More likely, France had enough to start the process but not enough to finish it.

That still makes France look pretty good. If the United States doesn’t follow suit, the French are the plucky heroes who tried to save the Libyan people but couldn’t quite pull it off, and the United States is the uncaring superpower that didn’t even try. If the US does join in, the French still get credit for being there first, and the US (as we have seen) takes the blame for the catastrophe. A winning move for France however things turn out, and one they don’t need our permission for.

Once France starts bombing, the hope for a quick and relatively bloodless regime victory goes away. From there, any plan that doesn’t involve 30,000 NATO ground troops (which France would be hard-pressed to provide and Congress would never authorize) leaves Libya in ruins, a hundred thousand innocents dead, and the US looking bad. Obama got to choose between looking bad for not trying to help and looking bad for botching the job.

First, France simply doesn’t have 30k troops. France and the UK together might possibly be able to put together that many, but not right away.

Second, Libya in ruins is precisely what we have now, so it seems to me that your worst-case scenario of Obama doing nothing is where we are now, while the upside potential (e.g. perhaps the US convinces France not to go in) is large.

Then there’s the whole angle of moral culpability. After the Gulf War, Bush gave a speech calling for the Shia to revolt against Saddam, and they did, and he let Saddam kill them. Now, either of those actions, on its own, is defensible. It’s fine to give a speech that leads to nothing, and it would have been fine not to help a revolt if he hadn’t given the speech, but together, it’s a terrible and incredibly immoral move that does damage to America’s image.

I’m fairly certain that there isn’t a suicide epidemic in the military once you control for age and sex. It’s just that males of the right age to have served a tour or two have a much higher suicide rate, so it stands out.

Controlling for sex is very important and makes the military look better. But controlling for age will make the military look worse because older people commit suicide more than younger people (but it’s not a big effect).

In the Vietnam Era Twin Registry (twins were both conscripted based on the ballot, but only one would be combat exposed), medium combat exposure was a significant predictor for subsequent suicidal ideation.

The artificial sweetener-obesity link is to a secondary source and I couldn’t see an easy way to get to the actual studies. Based on priors derived from my experiences tracing secondary sources back to their studies, particularly in this field, I am 75% confident that the headline conclusion is substantially false or misleading.

The human gut is a rich ecological niche – one that your body goes through a lot of effort to ensure is exploited by symbiotic rather than parasitic organisms (human breast milk contains more than 700 species of live bacteria).

If you clean it out fully, you should expect other bacteria to fill the niche – and those bacteria weren’t selected for benevolence; they were selected for the ability to grow quickly in your digestive system in the absence of competition – a filter not likely to produce results you would like.

For almost all of human history, starvation and malnutrition have been far greater threats than obesity. One would have expected the human body to “go through a lot of effort” to select a gut microbiome that is as close to 100% efficient as possible at extracting calories from potential foodstuffs, while holding back as little as possible for the microbes themselves.

If that’s the evolutionary baseline, then it would seem that a person facing obesity would want, well, pretty much any gut microbiome but the one evolution gave them. Excluding the ones that produce actual toxins, but that’s not evolutionarily optimal for the microbes either. If there’s an alternative microbiome that’s coevolved enough to be not actively toxic to humanity but less “efficient” than the normal one, that’s not necessarily a bad thing.

If your claim is that, absent the current gut microbiome that gives us all possible calories whether we want them or not, the gut will be colonized by acutely toxic microbes, with no middle ground, I’m skeptical. There’s clearly an evolutionary niche for microbes that hang around in the human gut, consuming calories for their own purposes – including intermicrobial warfare against competitors – while being careful not to produce anything toxic to the host. Has that niche not been filled?

One would have expected the human body to “go through a lot of effort” to select a gut microbiome that is as close to 100% efficient as possible at extracting calories from potential foodstuffs, while holding back as little as possible for the microbes themselves.

That is not a good assumption.

EDIT:

At minimum that is a huge assumption that ignores the possibility of the body outsourcing specialty chemical production to microbes – which strikes me as more likely than outsourcing energy extraction.

My thoughts exactly. I should be able to spend less on food by eating smaller portions, and still get the amount of calories I need.
And if starved people in [poor country] take some, it could help their meager amount of food go further, calorie-wise!

Something to point out: people under duress (for example, facing a million dollar lawsuit) sometimes make false confessions. I’m not suggesting necessarily that that is what happened with the librarians, but libel has a history of being used to suppress unwanted but true speech.

The defendants weren’t able to produce any witnesses, and (as I understand it) settled because they were probably going to lose the summary judgment. Absence of evidence isn’t always evidence of absence—but it sure is suggestive…

And when’s the last time defendants in a case like this actually admitted wrongdoing?

The sexual harassment case against the CEO of Stardock, Brad Wardell, ~2.5 years ago; see here for an example of the aftermath. It took a long time and a lot of effort for the retractions and the excuses to come.

Miseta didn’t admit wrongdoing. She apologized for bringing the suit, but didn’t say the suit was unjustified. She also apologized for any damage or destroyed materials, but specified that any such damage was accidental.

That’s very different from this case, where the defendants out-and-out admitted just about every element of libel. Both apologies specifically note that the statements were a) false b) damaging to Murphy’s reputation and c) intentionally posted. That’s essentially a confession to libel.

If I might play the skeptic here, I think it’s important to mention that the fact they apologized presumably has more to do with how they expected the court case to go than with the truth or falsity of their claims.

Their apparent expectations about the case are fairly good indirect evidence that they weren’t telling the truth, but it’s still imaginable they were outlawyered or were offered a bribe or got threatened.

In order for the plaintiff to actually settle in a “I am about to win such a massive summary judgment that I’ll have you in debt slavery for ten generations” kind of case, what they get in the settlement has to be something the court can’t order.

In order for the defendants to make such a categorical admission of guilt, one so incredibly detrimental to “the good fight” they were trying to claim was their motivation (which kind of makes it morbidly comical)… (a) they had to have zero chance of winning in the face of ten generations of debt slavery, and (b) the consideration the plaintiff was willing to make in light of their admission of guilt had to be substantial.

Lawyers don’t give a shit about apologies; they are after their percentage of the settlement, so the apology was the plaintiff’s requirement. Even with the exoneration, he will almost certainly never be able to work again, so for him to forgo a substantial portion of the monetary award the apology has to go deeper than restored reputation. The kind of deep that is very suggestive of real wrongs.

Or, I don’t know, maybe they were just soooo out lawyered that they had to apologize and still get ten generations of debt slavery.

I read this as the defendants having based their accusations on other peoples’ gossip and innuendo, and the gossipers then understandably refusing to come out of the woodwork and testify in a million-dollar lawsuit. Not quite the same thing as the defendants having sat around saying, “let’s just make something up because we want to Win A Victory For Social Justice!”. Still results in losing the lawsuit and, once it becomes clear that you are going to lose, cutting your losses by issuing an apology whose wording is approved by the plaintiff.

Not that malicious gossip-mongering is any great moral improvement over outright fabrication, and no words of apology are sufficient compensation for even the lesser of those offenses, IMO.

There’s a fine line between reckless disregard for the truth and fabrication, and when your lawyer proves unable to subpoena anyone who could defend a claim like “women attending lib conferences literally have instituted a buddy-type safety system to protect themselves [from Murphy],” reasonable people might be confused about whether you’ve crossed it.

You cling to that remote possibility. The million dollar lawsuit was only to exact the apology, the settlement was for Murphy’s legal fees and the apology, and for all the money raised to defend the jackasses to go to charity. Not quite the settlement of a man trying to “suppress true speech”.

This was a hit job, with the perps publicly gloating about how they were going to destroy Murphy’s career.

Around last September or October, several cases hit at around the same time. Rolling Stone’s “Jackie” story about UVA was initially the big one, but there was also a case brought against CBC presenter and Moxy Fruvous frontman Jian Ghomeshi, and this suit against Murphy. The core narrative that strung them all together used the slogan “Listen And Believe.”

The argument used in the debates around all three cases was that given the hostile environment women face, and given what were claimed to be astonishingly low rates of false reporting, accusations by women should be taken as true, evidence and investigation should not be required, and the accused should be punished/disciplined/ostracized without due process. It was argued that “innocent until proven guilty” was appropriate in a legal context, while these accusations were mostly raised in an administrative one, so summary punishment was appropriate. From extensive reading at the time, these were not isolated viewpoints. They were quite widespread.

Now Murphy’s case has resolved, and no evidence was available to back his accusers’ claims. Despite this, Arthur Chu and other prominent social justice figures continue to support #TeamHarpy, insinuating that the court battle proves nothing, and dragging the rumors against Murphy back out for another go-around. Comments and blogs have mourned the horrible harassment #TeamHarpy has experienced on the internet, as a consequence of publicly destroying a colleague’s career with unsupported allegations as part of a campaign to strike at the concept of due process.

All three cases were based heavily on rumors and speculation. The following was one of the big ones in Ghomeshi’s case: http://www.nothinginwinnipeg.com/2014/10/do-you-know-about-jian/
…And it was functionally identical to the accusations against Murphy. “Everyone knows”, “I’ve heard from multiple people”, etc. Ghomeshi may actually be guilty; several women have come forward to accuse him publicly.

Personally, I am done giving credence to rumors and speculation in this area.

I took the UVA allegations utterly at face value when they were published

Really? The way the characters in that story acted, and the things they said, didn’t look like things a real human being would do. They did, however, look suspiciously like the things activists say the cartoon villains they’re fighting think (i.e. the same sorts of clue that social justice warlock mentioned elsewhere in this page, about something else).

Really. I believed the whole thing, hook, line and sinker, and got *powerful* mad that such a fucked-up organization producing such fucked-up people existed. I still wasn’t in favor of Listen And Believe, but the facts as presented sounded like an excellent case for writing off the faculty as hopelessly corrupt and burning the place down with flamethrowers.

In my defense, I had only just hit hard contact with Social Justice, and was still at the point where I assumed some basic level of good faith on their part. Then it turned out the whole thing was lies, and I learned a Very Valuable Lesson.

No, as far as I’m aware there’s no solid evidence in the matter, nor would you expect any given the age and nature of most of the accusations. Remember, the factors that make false rape accusations so easy are the same ones that make pursuing legitimate cases very hard.

However, the way the accusations and response for the Cosby allegations played out incline me to think that one’s legit. Lots of unrelated accusers popped up after the story appeared, similar accusations had been made ten years ago, they had no trouble tracking down credible figures who’d worked with Cosby and didn’t think it was preposterously out-of-character, etc. (I also find it likely that Cosby’s counter-accusation of attempted extortion is true, but that’s neither here nor there.)

For what it’s worth with respect to my general gullibility, I was an immediate UVA skeptic. The story too closely resembled the universally bullshit accounts of Satanic ritual abuse for me to find it credible.

It wasn’t just rumors. It was fairly well-publicized a decade ago. He was facing a lawsuit. Only maybe three women publicly accused him, but there were about a dozen more who were going to testify. When he settled, they didn’t want to go public just for the sake of publicity. But when it came back in the news (first the February Newsweek interview with the plaintiff, then the comedian going viral in October) they finally came forward. Part of it was that Cosby responded to the interview by saying that the settlement vindicated him, and that pissed off the accusers who had seen the settlement vindicating them, indeed, had not come forward because of the settlement.

Thanks, I didn’t know that Hannibal Buress wrote for SNL and 30 Rock. He joined the shows after they did their Cosby jokes, but he talked to and worked with the writer. Tina Fey is the obvious guess, since she performed the SNL sketch and was the writer of the 30 Rock episode.

So the Aaronson link is about 80% over my head, but I’m not sure how his requirement that a chunk of matter participate in the arrow of time in order to be conscious draws a firm line. Specifically, what normal processes *don’t* participate in the arrow of time? How does this draw a line between myself and my stapler, since precisely reversing my stapler’s actions is just as impossible as unthinking my last thought?

Well I can’t be sure! My point was more that I’m not sure of anything in the world that doesn’t undergo time’s entropic growth, and so I don’t see how exactly this definition helps define consciousness.

Thinking about it today however, Aaronson was only using this condition to weed out certain philosophical paradoxes, not to draw a consciousness line between observed chunks of matter. Peter’s necessary-but-not-sufficient condition makes sense of the article’s purpose to me, and presumably we’d use a different metric to determine degree of consciousness.

I think it’s to be read as a necessary but not sufficient condition – if Scott’s heavily-disclaimered wild speculations are correct we can exclude Boltzmann brains, odd quantum states etc. for not participating. Note that there seems to be a link between “not participating in the arrow of time” and “fundamentally hard to observe”…

“Reversible” is one of these horribly overloaded words anyway. Back when I was a Chemistry undergrad, I remember getting confused between reversible reactions and thermodynamic reversibility and irreversibility; one doesn’t imply the other.

If I have a piston full of cold air, and I heat it up quickly, so that the slidy bit moves fast and hits against the stops with an almighty clang, that’s an irreversible process. If I cool it down again with liquid nitrogen so the moving bit moves back and once again clangs against the stops, that’s another irreversible process. The fact that I’ve got the piston back to where it originally came from is neither here nor there; it’s the environment that’s been irreversibly altered. This was terribly confusing to me for a while.
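The piston story can be put in numbers. The sketch below is my own toy version of it, not the commenter’s: an ideal monatomic gas heated at constant volume by a hot reservoir and then cooled back by a cold one (all the specific temperatures and quantities are illustrative assumptions). The gas’s entropy change nets out to zero because it returns to its starting state, but the reservoirs’ doesn’t, because heat crossed a finite temperature gap both times:

```python
import math

R = 8.314                      # J/(mol*K), molar gas constant
Cv = 1.5 * R                   # heat capacity of a monatomic ideal gas at constant volume
n = 1.0                        # moles of gas (assumed)
T1, T2 = 300.0, 600.0          # gas temperatures before and after heating, K (assumed)
T_hot, T_cold = 600.0, 77.0    # reservoir temperatures; liquid N2 boils near 77 K

# Heat exchanged in each constant-volume step
Q = n * Cv * (T2 - T1)

# Entropy change of the gas depends only on its end states
dS_gas_heat = n * Cv * math.log(T2 / T1)
dS_gas_cool = n * Cv * math.log(T1 / T2)

# Entropy change of the reservoirs: heat leaves the hot one and enters the cold one
dS_hot_reservoir = -Q / T_hot
dS_cold_reservoir = +Q / T_cold

dS_gas_total = dS_gas_heat + dS_gas_cool   # ~0: the gas is back where it started
dS_universe = dS_gas_total + dS_hot_reservoir + dS_cold_reservoir

print(f"gas:      {dS_gas_total:+.2f} J/K")
print(f"universe: {dS_universe:+.2f} J/K")  # positive: the round trip was irreversible
```

The gas completing its round trip tells you nothing about reversibility; the positive total for the universe is what marks both steps as irreversible.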

I came to Aaronson’s post fully expecting to disagree. But I only disagree with the reasons – the thesis of participating in the arrow of time looks correct. So, what are my reasons?

Consciousness, in my book, necessarily involves record-making, creating a physical record such as a memory, which is usually an irreversible process. If we consider memory + recall together, it’s always a thermodynamically irreversible process. And if the subject doesn’t recall any experiences, there isn’t much of a “subject” there.

In order to experience a sensation, the information from a sensory channel has to be broadcast brain-wide. It has to reach the “global workspace”, to use the cog sci / neuro term. (See here on fish pain for similar remarks on consciousness.) This already involves creating a physical record.

Free IUDs reduce teen pregnancy. Part of me wants to be snarky and say something like “sun reduces darkness”, but the last time they did one of these studies with condoms it turned out to be incredibly flawed, so I’ll wait until someone’s double-checked the methodology.

I buy it, but note that it will probably reduce the rate of teen pregnancy among teens who are smart and able to plan ahead, compared to…other teens. But it’s precisely the former group we want getting teen pregnant.

A big point for IUDs is that once you’ve had the initial appointments to get the thing put in, you don’t have to do any thinking or planning for a very long time. If it’s being offered free, then whoever is paying for it may be setting up the appointments, paperwork, whatever, and walking the girl through whatever hoops there are.

Handing out free condoms means the recipients still have to use them, and we’ve had a comment thread on another post where people were complaining condoms made sex more unpleasant.

Free IUDs, once inserted – you don’t have to do anything else. You don’t have to remember to take pills on a regular schedule, you don’t have to use barrier methods which involve all kinds of fiddling about, you don’t have an implant which eventually runs out and you have to remember in time to get it replaced.

“The new frontier in the field that people are writing books about is the economics of pirates. And the title is perfect”

Peter Leeson’s The Invisible Hook is a lot of fun, and its analysis of the 18th c. piracy industry may well be correct. But the one thing that is wrong with it is the title, since he isn’t showing anything closely analogous to Adam Smith’s invisible hand.

The invisible hand is the hidden force that guides economic cooperation. According to Smith, people are self-interested; they’re interested in doing what’s best for them. However, often times, to do what’s best for them, people must also do what’s best for others. The reason for this is straightforward. Most of us can only serve our self-interests by cooperating with others….

Because of this, Smith observed, in seeking to satisfy our own interests, we’re led, “as if by an invisible hand,” to serve others’ interests, too….

Smith’s invisible hand is as true for criminals as it is for anyone else. Although criminals direct their cooperation at someone else’s loss, if they desire to move beyond one-man mug jobs, they must also cooperate with others to satisfy their self-interests. A one-man pirate “crew,” for example, wouldn’t have gotten far. To take the massive hauls they aimed at, pirates had to cooperate with many other sea dogs. The mystery is how such a shifty “parcel of rogues” managed to pull this off. And the key to unlocking this mystery is the invisible hook – the piratical analog to Smith’s invisible hand – which describes how pirate self-interest seeking led to cooperation among sea bandits, and which this book explores.

We have replaced hereditary monarchies by hereditary seats in national parliaments, if you go by Irish political families 🙂

People routinely are succeeded in their seat on the county council/national parliament by siblings, children, nieces, nephews, and widows when leaving politics to either move to a higher level, or to retire (or by death).

It stuns me that we have such a hard time even considering whether the high price of medical care has something to do with the people and entities who set the prices and collect the money.

We have economic theories, and they all tell us one thing about the cost of a high-demand good that cannot be easily substituted, where consumers have little ability to choose alternative providers of the same good, and little ability to gain the information or leverage necessary to negotiate. The price should be expected to be high, obviously.

And we can observe hospitals and doctors making huge amounts of money relative to other similar professionals.

And we can observe that insurance companies, which CAN choose alternative providers and DO have the information and leverage necessary to negotiate price, can negotiate incredible reductions in price- so much so that having insurance is important just to gain access to preferential pricing!

But our commitment to believing in meritocracy against all else prevents us from seeing this really, really, really simple thing.

It seems to me you’d have to start a clinic near a big and profitable one, and treat people at slightly less profit. The big one will want to buy your small clinic, and since what they get from buying yours (their monopoly back) is worth more than your actual valuation, they’ll agree to overpay. Take that money, and build two new small clinics – preferably near other big hospitals, because the same one can’t afford to pay as much the second time you pull this.

So they just implement a sealed-envelope strategy and make their rules so that they always have to undercut small clinics near them. You now have no incentive to open a clinic, as you’ll lose money, and thus they never actually need to undercut anyone.

>It seems to me you’d have to start a clinic near a big and profitable one, and treat people at slightly less profit.

That would be a fine strategy if we were talking about restaurants. But in most US states you can’t open a new hospital (or expand an old one) without a Certificate of Need hearing at which the existing players will testify that there’s no “need” for your hospital because there’s already more than enough beds in theirs to meet existing demand. The mere idea that adding new capacity would drive down prices is NOT considered an acceptable reason to open a hospital.

Get rid of those laws, let anybody open a hospital, and hospitals might actually compete on the basis of clear, low, transparent prices.

American doctor salaries? Stagnant at a significant multiple of the salary of doctors from other western countries? Tell me more.

More seriously, I do believe I said “doctors and hospitals,” and referred to the people who set the prices and collect the money. Is that you, perchance? Not under my understanding of the basic structure of hospitals…

If doctors’ salaries are high relative to other parts of the world, that might be a reason why U.S. health care costs are also high. But if they have been stagnant for the past few decades, they are not a reason why health care costs have gone steeply up.

In this case as others, it’s important to distinguish between levels and rates of change.

It might be worth mentioning the next event in the pizza-place affair: someone puts up a GoFundMe page saying “these people are being persecuted for taking a brave stand for their Christian faith” and bam, they get $840k in donations. I guess GoFundMe takes a cut, and I realise that I have no idea whether these things are taxable. The pizza people are probably taking away somewhat less than $840k. Still, it’s hard to believe they’re not coming out ahead overall.

Of course that doesn’t excuse whatever threats they got. (Though I can’t help suspecting that what actually made them close was more that they got a flood of bad reviews and fewer customers.)

And people who put up appeals for community disasters, children with cancer needing treatment, or “fund my transition surgery” probably get a lot of extra support over and above the costs needed.

Your point is? “Oh, these people didn’t suffer because they actually made money out of it!”

I think the woman was very naive (to be charitable: to be uncharitable, she was stupid) to give an honest answer to a news reporter. It’s the same type of “Have you stopped beating your wife?” question; who caters their wedding from a pizza parlour (although yes, probably somebody has done it)? The TV station wanted to get a hot news story out of the dispute, so they trawled around until they found somebody not clued-in enough to tell their reporter “Yes, I think the law is a good idea; no, we would serve gay customers here, but we wouldn’t cater a wedding because it’s against our beliefs”.

Bam! Top of the hour “Look at the face of naked bigotry in our community – details at nine” for the station, and the Usual Suspects jump on the bandwagon in the echo chamber of condemnation.

Having sufficiently mangled metaphors, I will now cease flogging this dead horse.

Your point is? “Oh, these people didn’t suffer because they actually made money out of it!”

No, oddly enough, that is not my point, and I would be slightly happier if you hadn’t taken something that neither quotes my words nor expresses my opinion and put it in quotation marks as if I’d said it.

(My point is simply: here is another part of the story, and it seems like an important part. Someone more cynical than I am might suggest that the O’Connors could have predicted that bit and were therefore not actually risking much by saying what they did; but my own guess is that they never thought of the possibility.)

Look at the face of naked bigotry in our community

Did you actually watch the segment? It looks pretty positive to me, and I don’t see the slightest hint of an accusation of bigotry. “Standing up for their religious beliefs”, “standing firm in their beliefs”, “small-town ideals”, etc. The absolute most negative thing it says is “you could say he’s set in his ways”, and FWIW it seems to be intended sympathetically. (If anyone can work out what Kevin O’Connor is saying after the sound cuts out in his bit, it might be interesting. Or it might not.)

On the spices, as someone who cooks a lot, I think the difference is that good European cooking is much more dependent on the quality of the ingredients and the cooking. Indian cooking is more error-tolerant.

I do know a few Indian restaurants where meals are much better than the average Indian restaurant and I presume that they are also fanatical about the quality of their ingredients, like the top French-style restaurants. In terms of the best-of-the-best meals I’ve had, I think Indian is at least as good as European. But most cooking in any cuisine is not at that level. So I wonder if the signalling going on by the European chefs and their patron was “I can get a meal tasting *this* good without needing to resort to all sorts of spices”, or, more simply: “Look ma, no hands!”

Why can’t it be an aesthetic difference? European cooking is not unique in emphasizing simple flavors with varying textures (as opposed to many flavors and simple textures). Chinese Dim Sum, a lot of Japanese food also fit this description.

I don’t get what Aaronson thinks the problem is. Humans value the continuity of their experience because experiences feel good from the inside. Good feelings are themselves processes, good feelings would not exist if they didn’t experience entropy. Entropy per se has little to do with it. If Aaronson is asking why theoretical good feelings within a lookup table don’t satisfy our values, the answer is because mere lookup tables wouldn’t ever achieve anything that’s evolutionarily beneficial so we never evolved values like that, we evolved values which discourage inactivity beyond certain thresholds and which encourage us to go out and do things rather than to just mathematically contemplate them.

The word consciousness is used by Aaronson in two different senses, I think. In one sense, consciousness is that stuff that makes us sentient humans care about each other and enjoy our own lives. In the other sense, consciousness is a specific type of computational process that theoretically can be well defined. I’m willing to concede that unmoving lookup tables are conscious in the second sense. But that doesn’t mean I actually want to be an unmoving lookup table, locked in place forever. Yet he comes close to implying that the former actually does mean the latter, which I think is unjustified. I want to avoid being a lookup table because I want to do enjoyable things, not because I have academic worries about the nature of my consciousness.

I believe you could think of it as somewhat related to the philosophical tangle surrounding brain uploading.

That is, if ‘you’ are really just an information pattern that is defined by the physical configurations of neural or computer components, then it makes sense that we could switch out one for the other, preserve the relevant bits of the pattern, and preserve the continuity of self while chasing immortality or what have you. But! If that’s the case, why does it matter that there be a brain, or a computer, at all? I mean, it’s not like the number ten was harmed by that time Frodo lost a finger. So if human consciousness is an algorithm, or a model, or a pattern of information, or something else under the domain of ‘abstraction’, then we’re already immortal, because abstractions don’t decay or die.

What Aaronson is wrestling with, I think, is how you can define consciousness as substrate-independent on the one hand, but acknowledge it as a physical process on the other.

I don’t see how “if that’s the case, why does it matter that there be a brain, or a computer, at all?” makes sense as a relevant question. It matters because that’s just the way human values are, not because human values tell us anything fundamentally important about the nature of computation or consciousness. Humans who were content to exist as mere abstractions would fail to reproduce, so they don’t exist. This is true regardless of whether humans are computationally equivalent to giant and complicated lookup tables.

In a certain sense, sure, I’m already immortal. But I don’t care about whether or not the pattern that is me “exists” in some ethereal way. I want that pattern to change over time and to merge with and disrupt other patterns, I want to do things in life and not just to exist in it in an abstract sense.

According to this link, within 100 years southern and eastern Europe had recovered to the population of the year 1000, while western Europe was well ahead of that number. The year 1000 was the warm peak! If it took 350 years for Europe to reach peak population starting in 1000, why do you expect it to go faster starting in 1450? Why would it go faster when the climate starts out worse and is declining? A pure climate-Malthusian theory is more consistent with flat temperature than with a warm-cold cycle.

“in 100 years, southern and eastern Europe had recovered to the population of the year 1000” I don’t get why you consider the population of the year 1000 relevant; shouldn’t we be talking about recovering the population of the year 1300 or 1340? (1300 being shortly before the great famine of 1315, and 1340 being shortly before the plague.)
I remember that according to some authority France managed to recover its 1300 population only in the 18th century; that’s where I get the 400 years (I probably overshot, but that doesn’t change the problem much).

Getting to the main issue, the reason I assume the population should have grown faster in the period after 1300, compared to the period before 1300, is that, as far as I can tell without being a specialist, a lot of the growth in the period before 1300 was due to actual economic improvement such as the opening of new land to agriculture, the improvement of agricultural technology, the creation of infrastructure, and generally all-around high medieval economic progress (as opposed to being a simple Malthusian phenomenon). All these things that had been created should not disappear overnight just because a plague made many people die, and I would expect people to breed like rabbits when the amount of developed farmland per capita suddenly doubles or rises by 150%.
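How sensitive the recovery time is to the assumed growth rate can be made concrete with a back-of-the-envelope sketch. The rates below are my own illustrative guesses, not figures from the thread; the calculation just asks how long a population halved by plague takes to regain its former size at a steady per-year growth rate:

```python
import math

# Hypothetical sustained annual growth rates (illustrative assumptions only)
scenarios = [
    ("slow (0.1%/yr)", 0.001),
    ("moderate (0.5%/yr)", 0.005),
    ("rabbit-like (1.5%/yr)", 0.015),
]

for label, r in scenarios:
    # A halved population must double: solve (1 + r)^t = 2, so t = ln 2 / ln(1 + r)
    t = math.log(2) / math.log(1 + r)
    print(f"{label}: ~{t:.0f} years to recover")
```

The spread is enormous: roughly 700 years at 0.1%/yr versus under 50 at 1.5%/yr, so a dispute over whether recovery “should” have taken 100 or 400 years is really a dispute over what growth rate the post-plague economy could sustain.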

According to the Malthusian cycle theory of European history, when was the previous peak, the one before the 1300 peak?

Opening new land to agriculture was largely a matter of having enough force to defend it. It was not something that went faster the second time around.

France might be an exception. It was the most settled territory, so it was subject to malthusian growth limited by caloric surplus, while other countries’ growth was limited by the ability to open new farmland.

The graph you link is from this blog post which follows it with the chart I linked without commenting on the utter incompatibility. It is copied from here, using data from this paper (table 6, page 22). Historical demography is hard. Drawing conclusions from it doubly so.

I imagine that the difference between the famine of 1315 and the Plague was that the small decrease did not disrupt the social order. In particular, that it did not reduce the land under cultivation.

As far as I can tell the Mongols in the Middle East physically destroyed the place, devastating irrigation works and causing desertification, so the comparison does not work. The plague should just have killed easily replaceable people.

Not entirely. As Douglas Knight points out, you have to be able to defend land to be able to farm it. Also, different technologies have different levels of population and social organization required to maintain or rebuild them.

Also: the Hundred Years’ War, the Reconquista, the Ottoman expansion (and fall of Constantinople), and more locally, the Wars of the Roses, the Burgundy Wars, various local conflicts within the Holy Roman Empire. While casualties in these wars would not be sufficient to actually lower the population or even substantially slow its growth, the devastation from the wars and seizure of crops during campaigning would be enough to slow the growth of the population, and even, locally, reverse it temporarily.

Also, it’s not quite accurate to look at the population of Europe in a time period that includes 1492. After that date, you should look at the population of Europeans, because they’d expanded their ecological range.

Good points, but there had been wars before – Barbarossa’s campaigns, the Guelph-Ghibelline wars, the Albigensian crusade, the Reconquista, the war of the Sicilian vespers, rebellions in England, warfare between England and France, and so on.
If war did intensify in the 14th century, and maybe it did, this is yet another thing that calls for an explanation.

Note that the two authors of this paper are neither historians nor climatologists — they’re economists.

There’s nothing wrong with writing papers outside your field, of course! Sometimes that can lead to exciting new insights. But when you go and look at the paper closely? Most of their cites are either to secondary sources or to their own papers.

Put another way: there’s a pretty strong academic consensus that the Little Ice Age was a thing. (Also, for the record, we know that there have been many other things *like* the Little Ice Age — in a historical context, it’s not unusual for climate to bob up and down a bit over a timescale of a few centuries. For an extreme example, google the Younger Dryas.)

Now, it’s a fine thing to challenge an academic consensus. But the pile of evidence in favor of the Little Ice Age is large and comes from multiple disciplines. So one paper, by two guys, using one technique, is not a serious challenge. (Or at least, not yet.)

I’ll take a look when I have some time, but I note with disappointment that it doesn’t mention the gloriously shameless form of ordeal called corsned. It was basically trial by cheese: a small piece of bread and cheese was placed on an altar, the accused prayed devoutly that the bread and cheese would choke him if he were guilty, and then he ate the bread and cheese.

Amazingly, there’s no record of anyone ever failing this ordeal. Even more amazingly, this form of ordeal was reserved for the clergy.

Peter’s thesis is that trial by ordeal worked. The system was set up in ways that gave defendants considerable control over what happened to them and defendants believed in ordeals, so guilty defendants mostly avoided ordeals. The priests realized that, and so rigged the ordeals to usually acquit.

1) the problem with artificial sweeteners is mostly compensation (Oh, I had a diet soda, so I might as well splurge on dessert). In studies where the sodas are double blinded, you don’t see these effects [http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0078039 — I’ve seen other studies agree]. Just giving people aspartame pills [http://www.ncbi.nlm.nih.gov/pubmed/796476] did not change their weight gain either (tested against placebo).

2) The paper you linked to has two big weaknesses: (A) although the paper is titled “artificial sweeteners”, they only actually see their effect with saccharin. I suspect a paper that was more honestly titled would not have had the same impact, however. I do think this is mild evidence against saccharin, but grouping things as “artificial sweeteners” is not helpful; those are marketing categories, not categories that nature respects. (B) the doses in the study are pretty large. A friend worked out that it was about 10 to 20 sodas a day for weeks.

You should probably limit your consumption of sweeteners, but the worst is probably fructose, glucose is not great, and saccharin might have issues. Artificial-vs-natural is not a useful distinction (e.g., the trend to move towards “natural” agave is a bad one, as agave has the highest fructose concentration of any natural sweetener).

I was hoping that the original paper would hazard an explanation for why a diverse group of compounds that all happen to taste sweet would have the same basic effect on the gut flora, even if to less of an extent than saccharin. But I didn’t find one.

Left to my own devices, I can very vaguely imagine some sort of metabolic response to sweetness affecting the relative fitness of different bacteria, and then the altered composition of the bacterial populations having further vague obesogenic effects. With feedbacks and stuff.

But it seems more likely that saccharin maybe does something specific that the others don’t. And that we are unreasonably suspicious of anything that promises a sugar-free lunch.

My shortcut for dismissing concerns about sugar substitutes is to recall that over time the specific form of the fear has shadowed broader anxieties, first about cost-cutting industrial substitutions, then cancer, then diabetes, and now the microbiome and obesity. Which makes me suspect that our problem with them has less to do with their chemical details than with their illicit magical promise of something for nothing.

The metabolic response seems plausible, as it also explains the apparent paradox that changing to non-caloric sweeteners does not seem to make people consume fewer calories, until you look at the studies where the sweetener is blinded and the effect disappears. In most of the metabolic response models, it shouldn’t matter whether the person thinks they’re drinking diet or regular.

It might be harder to tell them apart if it doesn’t say “diet” on the can, but that doesn’t mean it’s hard. I don’t drink much soda, but I do occasionally use energy drinks, and I’ve been blindsided more than once by vile aspartame chemical bitterness. It’s gotten to the point where I check calorie counts on any unfamiliar energy drink, since they aren’t reliably labeled — most say “zero carb” if they’re artificially sweetened, but not all.

On the other hand, 23andMe says I have supertaster genes, so maybe that’s contributing to this (though I don’t seem to have most of the alleged supertaster phenotype).

Sadly I think the math in the “bloggers don’t live in basements” post is off. It doesn’t account for selection effects.

In addition to its math being off, its facts are off too. It assumes that only single-family detached houses have basements. But in fact row houses have basements also, and row houses make up most of the old housing stock in the densely populated cities of the northeast.

Yeah, to me the article was basically “there are a lot of people we can technically call ‘bloggers’ and not that many basements, therefore there can’t be all that many bloggers living in their parents’ basements”.

But for one, I don’t think that’s what most people using this pejorative really mean. It’s more shorthand for “people who blog a lot are often losers”. Maybe they don’t literally live in their parents’ basements, but they may live with their parents in a basement-less house, or maybe they live alone in a slovenly fashion. There isn’t any clear data on that. I suspect such people are still in a clear minority, but the article doesn’t really tell us anything about that.

I’ve done financial analysis on hospitals at work. While I’m sure there are some hospitals in New York or San Fran that get donations from billionaires and are doing fine, cash flow from hospitals is nearly always negative. That is, expenses exceed revenue and donations are required to make up the shortfall. The articles on hospitals cherry pick locations and revenue, but you won’t get rich owning a hospital. Especially in Florida, Ohio, Illinois, Texas or anywhere rural.

Maybe medical care is expensive because it is valuable and we have a lower concentration of doctors than Europe. Leaving out our bizarre medical economy…

They’ll lose about $70,000 to transaction fees — good deal, GoFundMe — leaving them to walk away with about $770,000. As a data point, the average income of a small restaurant owner / operator in the US is about $60,000 per year. (cite: payscale.com. And it’s a pretty tight band — 25th percentile is about $45k, 75th percentile is about $90k.)
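A quick back-of-the-envelope check on those figures (all numbers are the ones quoted in this comment; the implied total raised is just their sum):

```python
# Figures from the comment above: ~$70k lost to transaction fees,
# walking away with ~$770k, versus a ~$60k/year average income
# for a small restaurant owner/operator (the payscale.com figure).
net = 770_000
fees = 70_000
avg_owner_income = 60_000

print(net + fees)                        # 840000 raised in total (implied)
print(round(net / avg_owner_income, 1))  # ~12.8 years of typical owner income
```

In other words, the campaign netted the owners roughly a dozen years of typical small-restaurant income in one go.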

Forced to close down: apparently they reopened yesterday, to little fanfare.

RE China – Chinese manufacturing workers are equivalent to robots from the POV of the US economy, so I find those two explanations to be actually the same – someone else doing the work much more cheaply than you ever could.

Leeson’s newer work goes into even more controversial and potentially interesting subjects. I took a course with him at GMU and in it we discussed his papers on human sacrifice and vermin trials. Both papers offer a potential theory for why communities engaged in seemingly irrational behavior. Here are the first few lines of his human sacrifice paper.

“This paper develops a theory of rational human sacrifice: the purchase and ritual slaughter of innocent persons to appease divinities. I argue that human sacrifice is a technology for protecting property rights. It improves property protection by destroying part of sacrificing communities’ wealth, which depresses the expected payoff of plundering them.”

Is the consensus really that any sufficiently complex computational mechanism will be conscious? I’ve always thought of consciousness as a type of computation that is sometimes useful to advance certain goals, not an intrinsic property of advanced computation. So to me saying that the USA must be conscious because it is sufficiently complex and self-regulating is like saying that all modern computers must run SimCity because they have enough RAM and hard drive space to do so. Am I wrong?

Huh? I didn’t get that from the paper or the discussion. I think that, for once, this was a discussion of consciousness where every single participant was smart enough to see the emptiness of “sufficiently complex”.

I don’t know if this is allowed (feel free to delete it if it isn’t) but here’s an interesting story I found on Quora: Programming, Psychology PhD students, and Racism. tl;dr: a programmer gives up his pay (sorta) to have a battle of wits (sorta) with a racist former psychology PhD student (?).

Regarding Libya and military interventions in general, the critical question for avoiding a humanitarian catastrophe is, “Which government is going to provide the army of occupation that will shoot dead all the gangs of robbers, rapists, and other assorted troublemakers that come out of the woodwork over the next five years?” Expect to need at least one combat soldier per two hundred local residents, and they’ll need to be actual soldiers. Peacekeepers probably don’t count, nor nation-builders, this is still about killing people or convincingly threatening to kill people in large numbers.

If the answer is “I don’t know” or “That won’t be necessary”, you’re getting a humanitarian catastrophe. If the answer is “La Resistance will form a government”, you’re getting a humanitarian catastrophe. You need an army, and you need people with experience using an army to smash troublemakers without just killing everyone they don’t like.

It seems like the Vox article is a much more reasonable interpretation of that study than the National Post one. This is less a multiple possible interpretations thing and more one basically correct interpretation, and one basically incorrect one.

Why does European cooking have less spice than Indian and other cuisines? One theory: after the Age of Exploration, spice became cheap and everyone started countersignaling.

I was talking to a chef once, and he said that there had been an important shift in European cuisine that started in France: away from using sauces (and spices, probably, but he was only talking about sauces) to compensate for inferior ingredients, to mask unwanted flavors, and so on, toward using sauces to complement the flavor of the thing they’re put on.

So it could be countersignaling, but it could also come from access to better ingredients — or from a desire to make the meat be more meatlike, as is mentioned in the article.

(Disclaimer: I have a reason to be skeptical of the countersignaling hypothesis. Draw a line on a map through Thailand and Syria — any cuisine south of that line mostly tastes like dirt to me.)

The West Hunter response to Poverty Shrinks Brains, that it’s likely a genetics issue, doesn’t really hold up given the studies referenced at the conclusion of the article:

“Still, the researchers are hopeful that the impacts could be reversible through interventions such as providing better child care and nutrition. Research in humans and in other animals suggests that is the case: a study in Mexico, for instance, showed that supplementing poor families’ income improved their children’s cognitive and language skills within 18 months”

I can certainly provide one anecdatum: I know at least one company, and its employees, that are definitely moving to Texas to escape California’s regulatory regime and taxes, and not for air-conditioning. That’s why the space company my father runs is moving. I don’t know if it’s the main driver, but it is absolutely a thing that happens.

On spices, from the link: In medieval Europe, those who could afford to do so would generously season their stews with saffron, cinnamon, cloves and ginger. Sugar was ubiquitous in savory dishes. And haute European cuisine, until the mid-1600s, was defined by its use of complex, contrasting flavors.

One problem with this is confusing “European food” with “the food of the European ruling class”.

People in Europe who weren’t rich weren’t eating richly spiced food with imported delicacies until 1600 and then stopped; they never were. The reason people get the wrong idea about that is, I think, that most [all?] of the cookbooks that survived are … upper class.

India and China (etc.) always had so many spices because they grew there; the spiced foods of India and China are (relatively) commonplace, not elite-only.

Simple prevalence and price explains it pretty well, honestly.

(And of course nobody in the graph had capsicum until after the New World.

Also, sugar was not ubiquitous in Europe until the 18th century; it was an expensive luxury until the New World plantations took off, and even that took 200 years to really drop the price.

Perhaps she meant “sweeteners” – honey was relatively common, though naturally still expensive.)

Re: difference in diet for different social classes (and the use of honey), from an article on the practice of fosterage in mediaeval Ireland:

Types of food were also distinguished according to rank. Porridge was given to all children, but the different flavourings reflected status: salt for the sons of the commoners, butter for the noble grades, and honey for royal children. The ingredients of the porridge itself differed, with a water-based porridge for the commoners, porridge made with new milk for the aristocratic grade, the same for the children of kings, but with extra wheat in it. In a hierarchical society, gradations permeated all aspects of life.

“The reason people get the wrong idea about that is, I think, that most [all?] of the cookbooks that survived are … upper class.”

Le Menagier de Paris, late 14th century, is from the upper middle class. At one point the author comments that a certain dish is too luxurious for them, suitable for a knight’s household. That’s the closest thing I know of to an exception prior to the 16th century.

On #teamharpy, I think I’m updating my default assumptions (as previously suggested) when presented with a dramatic example of a widespread problem to “the widespread problem probably exists, and this is probably not an example of it”. Cf. Michael Brown was not shot down in cold blood while surrendering, but the Ferguson police have been engaged in a campaign of racially-biased low-level harassment and extortion; Crystal Mangum was not raped by the Duke Lacrosse team, but there are a lot of men who get away with rape at college (as well as pretty much everywhere else). And the whole weird thing with Andrea Dworkin’s weirdly implausible allegations of being raped near the end of her life. And so on.

Anyone want to place estimates on Michael Shermer‘s guilt? Attempts to guess what said estimates would have been considering only the state of evidence circa 2009?

(Also, I’d update my estimates on the accuracy of “the whisper network”, which looks like a polite way to say “gossip” (see the comments; “I repeatedly heard people (both men and women) allege that he harrasses women at library conferences”, presented as useful evidence in and of itself).)

> the widespread problem probably exists, and this is probably not an example of it.

I don’t really understand this assumption.

The hypothesis “there is a real widespread problem, but once in a while a dramatic (rare) non-example occurs in the same geographical area” is surely less likely than “there is a real widespread problem and once in a while a dramatic example occurs in the same geographical area”.

But the “examples” being called out are really non-examples of the broader problem. Ferguson has a widespread problem with crooked cops and judges extorting money from poor black residents. Instead, we get a story about a cop shooting dead a poor black resident. That’s a different problem, and to a large extent a contradictory one – most crooked cops, and particularly the fiscally-motivated ones, are careful not to go around shooting people as the resulting paperwork and publicity get in the way of more lucrative activities.

Similarly at UVA, fraternities certainly have a widespread problem with what is normally known as “date rape”, but what Erdely and Rolling Stone give us is an allegation of forcible violent gang rape, which is a very different thing. And again, somewhat contradictory – the social environment that promotes date rape is very different than the one for violent gang rape, and the target sets are similarly different.

And, cross-thread, we had a real problem in Libya with a ruthless dictator who killed his political and military opponents, but we were instead fed stories of a literal babykilling genocidal maniac.

It should not have surprised anyone that the Ferguson case turned out to be an honest cop shooting a poor black criminal, and the UVA case turned out to be a fabrication. What is disappointing is that the non-representative fabrications were so eagerly believed and were so effective in inciting outrage while the real problems were being generally ignored.

That is not how updating works, unless you’re saying you’d have updated against “Ferguson police have been engaged in a campaign of racially-biased low-level harassment and extortion” had Michael Brown in fact been shot down in cold blood while surrendering.

I’m not sure how much you know about Bayes nets, but that toxoplasma post was making a point about alternate explanations. The way alternate explanations work (or multiple causes of a given event in a Bayes net) is that when one explanation is known to be true (i.e. toxoplasma is evident), all other explanations are reduced towards their priors. What this means for updating is that if you know event A was caused by toxoplasma, you shouldn’t be updating any other possible causes of event A (in either direction), or only updating very little; though you should increase your expectation of toxoplasma itself.

If you know the boy cried wolf falsely, your prior for a wolf showing up should be the same (though your prior for the boy lying to you should go up).

What this means is that, if you honestly believe #teamharpy to not be an example of sexism among librarians, but caused by social justice rage-mongering, you shouldn’t think sexism among librarians is more or less likely than before hearing of #teamharpy.

My priors are unaltered if I assume my priors are based on information obtained independently of A: TeamHarpy and B: whatever might have influenced TeamHarpy to engage in their dishonest behavior.

If the exposure of TeamHarpy’s dishonesty causes me to increase the assessed probability of “There exists a conspiracy or tendency to falsely accuse librarians of sexual harassment”, then I will modify a number of my priors on related issues.

Yes. It should be noted that you’re not updating the causes of event A directly, but through the update on the known cause B. So your prior on sexism among librarians is only affected to the degree that it depended on the honesty of #teamharpy (or the reference class you draw from them).
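The “reduced towards their prior” behavior discussed in this thread is the classic “explaining away” effect, and it can be sketched with a toy two-cause noisy-OR model. All the numbers below are made up purely for illustration: `A` stands in for the widespread problem, `B` for the alternate explanation (e.g. rage-mongering), and `E` for the dramatic event.

```python
from itertools import product

# Made-up priors: A = "widespread problem", B = "alternate explanation".
p_a, p_b = 0.3, 0.2

def p_e(a, b):
    """Noisy-OR likelihood of observing event E given which causes are active."""
    p_not = 0.95          # leak term: even with no cause, P(E) = 0.05
    if a:
        p_not *= 0.2      # A alone triggers E with probability 0.8
    if b:
        p_not *= 0.3      # B alone triggers E with probability 0.7
    return 1 - p_not

def posterior(query, evidence):
    """P(query = True | E observed, plus any extra evidence), by enumeration."""
    num = den = 0.0
    for a, b in product([True, False], repeat=2):
        state = {'a': a, 'b': b}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        w = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b) * p_e(a, b)
        den += w
        if state[query]:
            num += w
    return num / den

# Observing E alone raises belief in A well above its 0.3 prior...
print(posterior('a', {}))           # ~0.66
# ...but once B is known to be active, A falls back toward its prior.
print(posterior('a', {'b': True}))  # ~0.36
```

Note that conditioning on B does not push A *below* its prior: knowing B explains E away, so E simply stops being much evidence about A, which is exactly the “only updating very little” point made above.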

From the perspective of someone who considers anti-gay discrimination an evil, this is simple pragmatism. There are two ways we can deal with anti-gay discrimination. We can impose social, legal, and professional consequences upon people who do it, or we can try to reason with people who sincerely believe that a loving god exists but considers it an affront to humanity when two guys get married because a line in Leviticus says so. Which do you think will be more effective?

Will banning anti-gay discrimination and socially shitting all over homophobes stop them from holding homophobic beliefs? Of course not. But it will stop most of them from acting upon and expressing those beliefs. And then the problem is solved.

The third option is simply treating bigoted podunk pizzeria owners with the contemptuous indifference they deserve. That approach would’ve had the virtue of denying them $840k in GoFundMe money, and also not made the American left look like a bunch of vindictive, overreacting loons.

Proportional response is not just a matter of moral restraint. It’s a pragmatic policy, too, in its own way. Sometimes you gotta keep your powder dry.

That’s not really an option, because the American Left’s contempt is such that it cannot allow mere indifference, but leads inevitably to overreaction.
I mean, just look at how simple freedom of association is painted as some great evil requiring mass mobilization at the governmental and volunteer level.

I’ve said before that most of the SJW Left is about undoing classical liberalism, and the OP is a pretty good example of that. Tolerance, and the idea of allowing infidels the freedom to be infidels is not something that is even contemplated. The only question is whether to use persuasion to convert them, or the government establishment of (our) religion.

That being the acceptable attitude with which to regard those who differ from your opinions, the sensible person will keep their mouth shut when asked for any opinion that differs from the prevailing Zeitgeist – whether that prevailing social attitude is “gays are perverts” or “gays are wonderful”.

It won’t stamp out bigotry, but there will be a wonderful harvest of hypocrisy.

Will banning pro-pervert activism and socially shitting all over homosexuals stop them from holding homosexual attitudes? Of course not. But it will stop most of them from acting upon and expressing those beliefs. And then the problem is solved.

Yes, that solution has worked out so wonderfully, hasn’t it, when it came to discouraging gay rights activism. So of course it will stamp out homophobia!

Any case where such measures actually worked to suppress something won’t readily come to mind as an example, because the suppressed thing won’t register as something that has been suppressed, or even as something that people want to do at all. You probably wouldn’t have heard of it or thought much of it.

People disagree about many things. One approach to that is a system of voluntary transactions, in which a transaction (hiring, selling, …) happens if and only if both parties are in favor of it. Under that system, people who have views I strongly disapprove of will sometimes make life a little worse for innocent people, for instance by not hiring or selling their services to people whose race, sexuality, religion, or whatever they disapprove of. But they won’t make life much worse for those people unless the views in question are very widely held, widely enough so that almost nobody will sell to, hire, … the victims of the prejudice.

An alternative approach, embedded in current law and the moral intuitions of some people, is that a transaction happens either if both parties are in favor of it or if one party is in favor of it and the other is against it for reasons that the legal system disapproves of. With a perfectly wise and costlessly operating legal system, that might be an improvement. With a more realistic picture, it has a lot of pretty obvious downsides, including a sizable increase in social conflict and mutual dislike, since what counts as good reasons and correct moral and religious beliefs is now being determined by the political/legal system, giving everyone an incentive to fight to make sure his version wins. And in the case where the first approach works poorly, this approach works much more poorly, since if almost everyone shares the prejudice it is likely to be incorporated into the legal rules about what transactions happen.

>poor Americans today are much more educated than they were a generation ago, but still poor

I hate to defend the notion that modern education is teaching people much that is valuable, but my hatred for vox is much stronger. Modern poor people are a lot richer than poor people were 30 years ago, this vox article is nonsense.

>Libya was once our best bet for an example of foreign military intervention going well for once, but in retrospect it went terribly and might have been a huge mistake.

I have been saying this from the beginning. What is happening there was entirely predictable, and was predicted at the time; the Obama administration decided to throw a country under a bus rather than look bad. Much like Bush the elder’s response to the Iraqi uprising he called for, this is the unforgivable sin of this administration.

“Modern poor people are a lot richer than poor people were 30 years ago”

Data? When I looked at poverty figures, some years back, I concluded that the poverty rate (definition held constant) had been declining from the end of WWII to sometime in the sixties, roughly constant since then, with variation up or down depending on economic conditions.

Is it clear that that isn’t the case? Or is your “a lot richer” a description of increases in transfer via the welfare system, which I don’t think are included in the definition of income used to calculate the poverty rate?

Modern people have more things, keep them in bigger houses, and the quality of those goods is immeasurably superior. Whether the money is coming from transfers or not, people are consuming more than ever, and with the exception of housing, most of it isn’t paid for with debt.

How nepotistic are different industries in the US? One in fifty male governors has a son who’s also a governor; one in a hundred male football players has a son who’s also a football player.

Ok, here’s what we know – physical traits are inherited, personality traits and intelligence are inherited (they’re physical traits too on some level – the brain is a physical organ).

A highly extroverted, tall, at least moderately good looking man with a 120 or so IQ has lots of choices in careers. Actually winding up as governor of a state is a pretty unusual one that probably won’t even occur to most people but the son of a governor has a few things going for him in politics that most people don’t – contacts with consultants who can get him elected, name recognition with voters, knowledge of who can be trusted to make deals with in the political machine, inherited favors owed in the favor bank, etc. If you don’t have those factors you almost certainly go somewhere else when looking for a career.

A future football player, on the other hand, doesn’t benefit nearly as much from his non-genetic inheritance. What are the non-genetic factors there? You know coaches so you can be better trained, you’ve got a reputation that will help you make a team in HS (or college or the NFL) if you’re borderline, your father can teach you skills from when you’re very young so you develop them better (although I wouldn’t bet on the Antonio Cromartie type investing too much in his sons), etc. If you’re missing those and you’re still really good at football then you develop normally – on the team in HS, recruited to a D1 school, drafted into the NFL. At every step of the way coaches are trying their best to find talent – which is a big difference with becoming governor – at every step there they’re looking for reasons to filter people out.

Pretty clear to me that the NFL is hardly the result of nepotism at all, while governorships are the result of a process that isn’t exactly what we think of as nepotism (which has the implication of putting someone in a position where they’re not qualified because they’re related to the person appointing them) but is the result of paternal investment.

> At every step of the way coaches are trying their best to find talent – which is a big difference with becoming governor – at every step there they’re looking for reasons to filter people out.

As a practitioner, I am a bit startled at your characterization of the way politics works.

First of all, choosing a state governor is a tiny little subset of the whole process. There are an estimated 500,000 elected positions in the United States, and that doesn’t include innumerable political party posts.

The development of football players that you describe involves millions of people; so does the development of politicians.

Second, the political world is just as starved for genuine talent as any other field, and effective candidate recruitment is a critical element of any political party’s success.

Except perhaps at the very highest levels, a shortage of good candidates is much more typical than a surplus.

In terms of identifying and promoting talented people, politics and sports are not as different as you portray them.

How many of those 500,000 positions are filled by the sons and daughters of people who are, or have been, holders of those positions? Just from what I know about local to me:

Nancy Pelosi’s father and brother were both Mayor of Baltimore at different times. Her brother-in-law was on the San Francisco Board of Supervisors, and is the uncle of former Mayor (and now-Lt.Governor) Gavin Newsom.

Uh, your point? I was responding to the strange contention that politics is all about “looking for reasons to filter people out”.

But is it really news that children of politicians also enter politics? Children of clergy often become clergy, too. The same for funeral directors, lawyers, and physicians. When your parents are successful, it is only natural to want to emulate them.

The world of politics is big, and there are lots of different roles available. There’s no rule of dynastic descent which states that the daughter of Baltimore’s mayor shall inherit a congressional seat on the other side of the continent.

American politics used to be far more inbred than it is today. Southern political leaders, in particular, were drawn from a small and tightly interrelated aristocratic class.

With a larger and more diverse country, we get a larger and more diverse set of politicians — and most of them are self-starters, not inheritors of family connections and reputations. In Congress, the number of “legacy” members has been declining for decades.

The spate of recent U.S. presidential candidates who come from political families (George Bush I, George Bush II, Jeb Bush, Al Gore, Mitt Romney, Hillary Clinton, Rand Paul) gave a misleading impression that “political dynasties” are returning. But those seven are greatly outnumbered by all the other non-dynastic presidential candidates during that period.

On the “Intentional Weight Loss and All Cause Mortality” point, a few things stood out to me:

1. Given the vehemence with which doctors prescribe intentional weight loss as a cure-all, a 15% average reduction in all-cause mortality actually seems really small. Always remember to translate relative risk reductions into absolute ones – if you phrase the prescription as “by dieting long term, you reduce the risk of dying in the next 10 years from 10% to 8.5%,” that’s a lot of suffering for not a whole lot of gain.

2. The study compared mortality between intervention and non-intervention groups. It didn’t compare mortality by success. This means:

2a. The study results are consistent with intentional weight loss failing at its goal of actually losing weight, because possibly

2b. the interventions have healthful effects regardless of their effect on weight. This matches what some other studies comparing all-cause mortality by weight and “healthy habits” have found. In particular, one study used four healthy habits as its metric: (1) not smoking, (2) not drinking excessively, (3) eating 5 or more servings of fruit or vegetables a day, and (4) exercising for at least half an hour a day on an average of 12 or more days a month.

Then, looking at all-cause mortality by number of healthy habits and BMI, the study found that all-cause mortality correlated more strongly with the number of healthy habits than with weight: for every n, an obese person with n healthy habits had lower all-cause mortality than a normal-weight person with n-1 healthy habits. Also interesting: among the n >= 2 healthy-habit populations, all-cause mortality was very similar across weight groups, but in the n = 0 group, the obese had a much higher mortality rate than the normal-weight group (which was itself about twice as high as the n = 4 obese group).

So if a weight loss intervention encourages people to stop drinking, eat more vegetables, and exercise more, it could have positive effects on mortality regardless of effect on weight loss.
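The relative-versus-absolute distinction in point 1 above can be sketched as a quick back-of-the-envelope calculation (assuming the 15% figure is a relative reduction applied to a hypothetical 10% baseline, as in the comment):

```python
# Relative vs. absolute risk reduction, using the illustrative numbers above.
baseline_risk = 0.10        # assumed 10% chance of dying in the next 10 years
relative_reduction = 0.15   # the reported 15% drop in all-cause mortality

treated_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - treated_risk

print(f"risk falls from {baseline_risk:.1%} to {treated_risk:.2%}")
print(f"absolute reduction: {absolute_reduction:.2%}")
```

A 15% relative reduction sounds substantial, but on a 10% baseline it moves the absolute risk by only 1.5 percentage points, which is the commenter’s point.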

That was unexpected: the Supreme Court strips antitrust immunity from regulatory boards controlled by active members of the profession they regulate (unless the state actively supervises them), in what looks like a big victory for, for example, entrepreneurs in the dental industry who don’t want the dental establishment deciding whether they’re allowed to have a business. Cynical prediction: established players in the industry keep their regulatory boards, but pack them with non-professionals who just happen to agree with them about everything.

Short term solution: retired professionals.

Reading the beginning of the opinion: Good God is this going to be a mess when it hits California. If there’s a bad way to bring their practices into compliance, California will find a worse one.

Further in: “Critically, the municipality in Hallie exercised a wide range of governmental powers across different economic spheres, substantially reducing the risk that it would pursue private interests while regulating any single field.”

“it suggests the military’s high suicide rate may be related less to battle-related trauma and more to attracting suicidal sorts of people”

I don’t know about the US military, and most likely it’s not the case, but if we were talking about the Russian military, I would easily link high suicide rates to horrible, dehumanizing, abusive, and sometimes maiming or even deadly hazing. If it were that bad in the US, we’d have known by now, but it’s at least plausible that, despite the efforts of high commanders, the actual military culture promotes a repressive way of dealing with psychological issues and calls everyone who tries to address them a “pussy”, which would inevitably lead to higher suicide rates.

However, the point about the kind of people the military can attract may also be true. Over 94% of recent recruits were under 25 years of age, and 82% were 21 or younger. That fits my mental model of young non-transhumanist adults and teenagers, who have vastly uncalibrated risk assessments and nearly zero existential dread (also, all emotions overblown, including patriotism), which makes them take far larger risks and join the military. That, however, would explain reckless endangerment, not suicides.

Huh–I think I’ve seen someone else’s take on that same data; the really interesting part is Figure 2, which seems to imply that, really, the only losers from globalization are the middle class in rich countries; everyone else from the very poor to the very rich does better.

I’m not buying that spices from India were so cheap in Europe that European elites stopped using them in order to distinguish themselves from proles, but somehow this didn’t happen in India. Spices were massively cheaper and more abundant in India than in 17th-century France; if anyone was counter-signaling, it would have been the Indian elites.

I’ve picked up The Most Good You Can Do, and one thing I’ve noticed while reading it (and then going to 80,000 Hours) is that in all the writing about the best career paths, emergency services don’t come up. Either I haven’t read enough, or the most obviously helpful career paths are being ignored.

Or is it that you have to be special in order to do those jobs? That they aren’t worth considering for the average Joe because it takes a Well-Above Average Joe to do them (this I don’t believe).

In Australia at least you earn decent money (at least as a firie or cop, dunno about paramedics) and could conceivably donate a good chunk of it to meaningful charities, while also having a variety-filled, honest-to-God-useful career, with friends for life and a huge sense of purpose – one you could improve yourself in every day.

So anyway, why aren’t these jobs mentioned?

Also, how do you factor in that if x people are being kept alive, how much charity does technology need in order to keep up with the increased burden on food/energy/etc.? I mean, in a zombie apocalypse you might not want to save those 10 people clamouring at the door of your stronghold, because you know the whole thing will collapse under any more pressure.

Even without a shortage, they’re presumably picking the candidates they think are best for the job. So, at least insofar as their selection criteria work, you’re saving marginal lives over the guy that would have otherwise taken it.

So really it comes down solely to money, i.e. how much you can earn and then pass on? You can’t really use the argument that they choose the best, because that can be used for any career – only those who are the best will have a decent impact.

I mean, the vast majority of these roles are not particularly helpful (selling toothbrushes or writing TV gossip), and working toward the ones that are in some way influential is competitive/not guaranteed.

While if you work in emergency services you tick all these boxes:

Impact – will this role make a difference?

As you say, maybe it’s marginal, but does that really matter if ‘anyone would make a difference’? Maybe, even if the thing that matters most is money, a job that provides personal satisfaction and face-to-face charity gets you the best of both worlds? In emergency services you are almost guaranteed to make a daily difference. You could even take up a teaching role and influence children on safety, etc.

Career capital – will this step help me to develop useful skills, connections and credentials?

These are often long term careers, and you can influence how they operate eventually, even on a micro level. You will get better every day as you learn, and I imagine you will always learn (if you so choose).

Exploration value – will this step help me learn about the options available to me in the future?

Exploration within that particular field, at least – and obviously you will learn vital skills like teamwork, assertiveness, etc.

Personal fit – will I be good at this job?

Depends

Job satisfaction – will I enjoy this job, and will it fit in with the rest of my life?

Depends

It’s just perplexing that the only sort of related jobs mentioned are doctors (who earn a lot) and nurses. Basically, as far as I can tell, the movement’s message is ‘your job is only helpful if you can make a lot of money and give it away’. It seems like soft-neoliberal creep, pushing people to be hedge fund managers or ‘entrepreneurs’ (because the world totally needs an endless supply of people who think they have an amazing idea to offer the world).

That Aaronson post was long, but I’m glad I read it. Now I not only understand what Tegmark is talking about wrt numbers being conscious, I think it is obviously true that they are if you assume consciousness is a purely mechanistic process.

Well no, it’s obviously true if you assume consciousness is a purely mathematical process – note that computer science and computation are subfields of mathematics. On the other hand, if conscious processes are parts of the concrete material world, which is what I think, then that’s a whole different kettle of fish.
