Aidan O'Gara

There are probably people who can answer this better, but here's my crack at it (from most to least important):

1. If people who care about AI safety also happen to be the best at making AI, then they'll try to align the AI they make. (This is already turning out to be a pretty successful strategy: OpenAI is an industry leader that cares a lot about risks.)

2. If somebody figures out how to align AI, other people can use their methods. They'd probably want to, if they buy that misaligned AI is dangerous to them, but this could fail if aligned methods are less powerful or more difficult than not-necessarily-aligned methods.

1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate's health has nothing to do with the fact that you're an EA; they'd be just as good listening to any other trusted pundit. I'm not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.

2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree—they disagree for empirical reasons. If you stick to something where it's mostly a values question, people might trust your judgements more.

We want to jumpstart high-reward ideas—moonshots in many cases—that advance prosperity, opportunity, liberty, and well-being. We welcome the unusual and the unorthodox.

Projects will either be fellowships or grants: fellowships involve time in residence at the Mercatus Center in Northern Virginia; grants are one-time or slightly staggered payments to support a project.

Think of the goal of Emergent Ventures as supporting new ideas and projects that are too difficult, too hard to measure, too unusual, too foreign, too small, or…too something to make their way through the usual foundation and philanthropic process.

Here's the first cohort of grant recipients. I think your project would fit what they're looking for, and it's a pretty low cost to apply.

Agreed on both: an article along the lines of "The world's biggest pork producer just broke their animal welfare commitment" seems very valuable and possibly effective as shaming, while "Corporate animal welfare campaigning often fails to deliver" would definitely be counterproductive.

I think Vox's Future Perfect could be a good platform for this—either one of you writing a guest article, or giving Vox the information and letting them write it up. These broken commitments make an interesting news story to cover, Vox's readership is already fairly interested in animal rights, and they could build it into an ongoing series of articles tracking progress. Maybe consider reaching out directly to Kelsey Piper/Dylan Matthews/Vox?

I think I'd challenge this goal. If we're choosing between trying to improve Vox vs trying to discredit Vox, I think EA goals are served better by the former.

1. Vox seems at least somewhat open to change: Matthews and Ezra seem genuinely pretty EA, they went out on a limb to hire Piper, and they've sacrificed some readership to maintain EA fidelity. Even if they place less-than-ideal priority on EA goals vs. progressivism, profit, etc., they still clearly place some weight on pure EA.

2. We're unlikely to convince Future Perfect's readers that Future Perfect is bad/wrong and we in EA are right. We can convince core EAs to discredit Vox, but that's unnecessary—if you read the EA Forum, your primary source of EA info is not Vox.

Bottom line: non-EAs will continue to read Future Perfect no matter what. So let's make Future Perfect more EA, not less.

Agreed. If you accept the premise that EA should enter popular discourse, most generally informed people should be aware of it, etc., then I think you should like Vox. But if you think EA should be a small elite academic group, not a mass movement, that's another discussion entirely, and maybe you shouldn't like Vox.

3. I have no personal or inside info on Future Perfect, Vox, Dylan Matthews, Ezra Klein, etc. But it seems like they've got a fair bit of respect for the EA movement—they actually care about impact, and they're not trying to discredit or overtake more traditional EA figureheads like MacAskill and Singer.

Therefore I think we should be very respectful towards Vox, and treat them like ingroup members. We have great norms in the EA blogosphere about epistemic modesty, avoiding ad hominem attacks, viewing opposition charitably, etc. that allow us to have much more productive discussions. I think we can extend that relationship to Vox.

Using this piece as an example, if you were criticizing Rob Wiblin's podcasting instead of Vox's writing, I think people might ask you to be more charitable. We're not anti-criticism—we're absolutely committed to truth and honesty, which means seeking good criticism—but we also have well-justified trust in the community. We share a common goal, and that makes it really easy to cooperate.

Let's trust Vox like that. It'll make our cooperation more effective, we can help each other achieve our common goal, and, if necessary, we can always take back our trust later.

2. Just throwing it out there: should EA embrace being apolitical? As in, a possible official core virtue of the EA movement proper: Effective Altruism doesn't take sides on controversial political issues, though of course individual EAs are free to.

Robin Hanson's "pulling the rope sideways" analogy has always struck me: in society's great tug-of-war debates over abortion, immigration, and taxes, it's rarely effective to pick a side and pull. First, you're one of many, facing plenty of opposition, making your goal difficult to accomplish. But second, if half the country thinks your goal is bad, it very well might be. On the other hand, pulling sideways is easy: nobody's going to filibuster to prevent you from handing out malaria nets—everybody thinks it's a good idea.

(This doesn't mean not involving yourself in politics. 80k writes about improving political decision-making and becoming a congressional staffer—both nonpartisan ways to do good in politics.)

If EA were officially apolitical like this, we would benefit by Hanson's logic: we can more easily achieve our goals without enemies, and we're more likely to be right. But we could also gain credibility and influence in the long run by refusing to enter the political fray.

I think part of EA's success comes from it being an identity label, almost a third party, an ingroup for people who dislike the Red/Blue identity divide. I'd say most EAs (and certainly the EAs who do the most good) identify much more strongly with EA than with any political ideology. That keeps us more dedicated to the ingroup.

But I could imagine an EA failure mode where, a decade from now, Vox is the most popular "EA" platform and the average EA is liberal first, effective altruist second. This happens if EA becomes synonymous with other, more powerful identity labels—kind of like how animal rights and environmentalism could be their own identities, but they've mostly been absorbed into the political left.

If being apolitical were an official EA virtue, we could easily disown German Lopez on marijuana or Kamala Harris on criminal justice—improving epistemic standards and avoiding making enemies at the same time. Should we adopt it?

Really valuable post, particularly because EA should be paying more attention to Future Perfect—it's some of EA's biggest mainstream exposure. Some thoughts in different threads:

1. Writing for a general audience is really hard, and I don't think we can expect Vox to maintain the fidelity standards EA is used to. It has to be entertaining, every article has to be accessible to new readers (meaning you can't build up reader expectations over time, like a sequence of blog posts or a book would), and Vox has to write for the audience they have rather than wait for the audience we'd like.

In that light, look at, say, the baby Hitler article. It has to be connected to the average Vox reader's existing interests, hence the Ben Shapiro intro. It has to be entertaining, so Matthews digresses onto time travel and The Matrix. Then it has to provide valuable informational content: an intro to moral cluelessness and expected value.

It's pretty tough for one article to do all that, AND seriously critique Great Man history, AND explain the history of the Nazi Party. To me, dropping those isn't shoddy journalism; it's a valuable insight into how to engage your actual readers, not the ideal reader.

Bottom line: people who took the 2018 EA Survey are twice as likely as the average American to hold a bachelor's degree, and 7x as likely to hold a Ph.D. That's why Robin Hanson and GiveWell have been great reading resources so far. But if we actually want EA to go mainstream, we can't rely on econbloggers and think tanks to reach most people. We need easier explanations, and I think Vox provides that well.

...

(P.S. Small matter, but Matthews does not say that it's "totally impossible" to act in the face of cluelessness, unlike what you implied—he says the opposite. And then: "If we know the near-term effects of foiling a nuclear terrorism plot are that millions of people don't die, and don't know what the long-term effects will be, that's still a good reason to foil the plot." That's a great informal explanation. Edit to correct that?)

Fantastic, I completely agree, so I don't think we have any substantive disagreement.

I guess my only remaining question would then be: should your AI predictions ever influence your investing vs donating behavior? I'd say absolutely not, because you should have incredibly high priors on not beating the market. If your AI predictions imply that the market is wrong, that's just a mark against your AI predictions.

You seem inclined to agree: The only relevant factor for someone considering donation vs investment is expected future returns. You agree that we shouldn't expect AI companies to generate higher-than-average returns in the long run. Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don't expect AI companies to have higher-than-average future returns.

I think the background assumptions are probably doing a lot of work here. You'd have to go really far into the weeds of AI forecasting to get a good sense of which factors push in which directions, but I can come up with a million possible considerations.

It's hard to predict when AI will happen, and it's worlds harder to translate that into present-day stock-picking advice. If you've got a world-class understanding of the issues and spend a lot of time on it, then you might reasonably believe you can out-predict the market. But beating the market is the only way to generate higher-than-average returns in the long run.

The implicit argument here seems to be that, even if you think typical investment returns are too low to justify saving over donating, you should still consider investing in AI because it has higher growth potential.

I totally might be misunderstanding your point, but here's the contradiction as I see it. If you believe (A) the S&P500 doesn't give high enough returns to justify investing instead of donating, and (B) AI research companies are not currently undervalued (i.e., they have roughly the same net expected future returns as any other company), then you cannot believe that (C) AI stock is a better investment opportunity than any other.
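To make that structure concrete, here's a minimal numeric sketch. The 5% and 7% figures and the `donation_hurdle` threshold are purely hypothetical placeholders, not anything you've claimed: the point is just that if premise (B) pins AI companies' expected return to the market's, then whatever verdict premise (A) gives for the market automatically applies to AI stocks too.

```python
# Hypothetical numbers only -- illustrating the (A)/(B)/(C) argument above.

market_return = 0.05      # assumed expected annual return of a broad index fund
donation_hurdle = 0.07    # assumed return needed for "invest now" to beat "donate now"

# Premise (B): AI companies are fairly priced, so their expected return
# equals the market's expected return.
ai_return = market_return

def invest_beats_donating(expected_return, hurdle):
    """Invest only if expected returns clear the donate-now hurdle."""
    return expected_return > hurdle

print(invest_beats_donating(market_return, donation_hurdle))  # False, by premise (A)
print(invest_beats_donating(ai_return, donation_hurdle))      # also False, so (C) can't hold
```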

I completely agree that many slow-takeoff scenarios would make tech stocks skyrocket. But unless you're hoping to predict the future of AI better than the market, I'd say the expected value of AI is already reflected in tech stock prices.

To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued.

I like the general idea that AI timelines matter for all altruists, but I really don't think it's a good idea to try to "beat the market" like this. The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

Thinking that Google and Co are going to outperform the S&P500 over the next few decades might not sound like a super bold belief—but it should. It assumes that you're capable of making better predictions than the aggregate stock market. Don't bet on beating markets.