[I]n the future we hope to encourage new fund managers to create new funds with different focus areas than the current options.

As our three-month trial draws to a close, we're now thinking more seriously about adding new funds to EA Funds. However, there are a number of open questions that would determine how many funds we might add, which funds we might add, and how quickly we'd be able to add them. I outline the relevant open questions as I see them below.

CEA plans to discuss adding new funds during our team retreat after EA Global: Boston. The goal of this post is to get feedback on these questions from the community to help inform that discussion. Please provide feedback in the comments below. If you're attending EA Global: Boston, you can also grab me for a quick chat there.

Below I present each open question, try to explain the full range of options available, and then outline some of the considerations that I think are relevant to addressing the question. The goal is to remain neutral on the answers while still providing relevant information. The inclusion of an option or a consideration does not necessarily imply endorsement by me or by others at CEA.

If you think there are open questions I have missed, please feel free to suggest them in the comments.

Open Questions

Question 1: Should we add new funds? If so, when?

The first question is whether we should add new funds at all and, if so, on what timeline. Part of the answer depends on how much money is moving through EA Funds. For reference, EA Funds has processed $775,000 so far, with $31,000 in monthly recurring donations. We expect the pace of growth in the near future to be slower than it was in the first three months as the initial buzz around EA Funds dies down.

Potential options

Don't add new funds

The first option is not to add new funds at all. For example, we might want to tweak the existing funds by selecting new fund managers or by having multiple people manage certain funds, but we might not want to expand past a small number of funds that represent the most widely supported causes.

Add new funds, but later

We might want to add new funds, but only after EA Funds has a longer track record or has reached certain milestones. For example, we might only want to add funds after a year, or once we've moved a certain amount of money, or once we've reached a certain amount in monthly recurring donations.

Add new funds now

Finally, we might opt to add new funds very soon.

Considerations

Future growth of EA Funds

Whether to add new funds depends, at least in part, on how much money might be available to support them, which depends in turn on EA Funds' future growth prospects.

This is hard to determine, but here are some guesses. First, I don't expect us to raise as much money over the next three months as we did over the initial three months. Much of the money we do raise will be driven by the $31,000 in monthly recurring donations that have already been set up, and it is unlikely that donors with recurring donations will change their allocation to include new funds. This means it may be relatively difficult to move significant amounts of money through new funds in the short term.

On the other hand, the user base of EA Funds is still relatively small (around 665 unique donors), so there may be significant low-hanging fruit in getting people already involved in EA to consider using the platform. Additionally, adding a fund that meets an as-yet unmet demand could cause additional money to flow through the platform in a way that doesn't cannibalize existing funds.

Viewpoint diversity

All of our current funds are run by GiveWell/Open Phil staff members. As we've stated in the past, we aim to have 50% or fewer of the program officers work at GiveWell/Open Phil. Adding more funds seems like the most plausible way to achieve this goal.

Reputation

Adding new funds that are significantly worse than the existing options might harm the reputation of EA Funds, CEA, and EA in general. Conversely, adding high-quality funds in new areas may improve the reputation of EA by further showcasing the EA community's ability to find interesting ways of improving the world.

Question 2: What kinds of funds should we add?

Our existing funds each focus on a single broad cause area that EAs have historically supported. The existing funds were designed to give fund managers relatively wide latitude to decide what use of funds is best while also making it clear to donors what the funds might support.

One question for the future is whether we should expand EA Funds by adding new funds in new cause areas or by adding new funds built around themes other than cause areas.

Potential options

Below are some options for the kinds of funds we might add. Keep in mind that these options are not mutually exclusive, so we could pursue several of them.

New funds in new cause areas

We could simply add new funds in new cause areas. These would operate similarly to the existing funds.

New funds in existing cause areas

We could add funds in existing cause areas that have the same scope as the current funds. For example, we could add a second fund in global health and development with the same scope as the fund managed by Elie, but managed by someone else.

Fund manager's discretion

We could add funds that give the fund manager wide latitude to recommend a grant to whatever they think is best, regardless of cause area.

Different approaches to existing causes

We could add funds that take a different approach to existing cause areas. For example, we could add a fund in global health and development that focuses on high-risk, high-reward projects (e.g. startups, or funding evaluations rather than direct interventions), or we could add a long-term future fund that focuses on areas other than AI safety.

Funds based on particular tactics

We could add funds that are focused on particular tactics instead of cause areas. For example, we could add a fund that donates only to startups or that funds research projects. These funds could operate across a variety of cause areas.

Funds based on normative disagreements

We could add funds that are based on specific normative disagreements. For example, we could have a fund that focuses predominantly on improving (and not necessarily saving) lives, or a fund that focuses on reducing suffering.

Considerations

The chicken-and-egg problem for new causes

For a fund in a new cause area to succeed, it needs both money and high-quality projects to support with that money. This presents different problems for EA Funds than those faced by large funders with an endowment, like Open Phil. Since Open Phil already has the money, it can declare an interest in funding some new area and then use the promise of potential funding to prompt people to start new projects. If no projects show up, it can simply redirect the money elsewhere.

For EA Funds, however, a fund's ability to attract money partially depends on the existence of promising projects to fund (since a fund without plausible grantees will have a hard time getting donations). This means that EA Funds may find it difficult to catalyze activity in completely novel areas.

Clarity

It should be relatively easy for donors to figure out what they're supporting when they donate to a fund. For donors willing to do research, the fund page should be sufficient to help them understand each fund.

However, not all donors will carefully read the fund pages, and many donors will choose which fund pages to review based on the name and perhaps a short description of each fund. While we hope donors will look at the details of each fund, realistically a fund's name alone may have a disproportionate effect on whether people choose to support it.

Fund names should satisfy two goals:

The name should make it clear what the fund is likely to support.

The name should make it clear how the fund differs from the other available funds.

However, some options for adding new funds present greater clarity challenges than others. For example, funds in the same cause area as existing funds will present a particular challenge in choosing names that make it easy to understand how the funds differ. Similarly, funds that operate at the fund manager's discretion will be difficult to name in a way that makes it clear what the fund is likely to support.

Expanding EA's intellectual horizons

Adding funds in areas outside of global health, animal welfare, the long-term future, and the EA community would help expand the intellectual horizons of EAs and help us find promising new cause areas.

Question 3: How should we vet new funds?

Our current funds represent problem areas that we think are especially promising and that have wide community support, and each is run by a fund manager who we think has strong knowledge of and connections in the fund's area. We could attempt to ensure that any new funds adhere to similar standards, or we could substantially open the platform up and allow anyone (or nearly anyone) to create a fund of their own.

Below I try to outline a continuum of plausible options for the degree to which we ought to vet new funds. I then outline some considerations that are relevant for deciding where we ought to fall along this continuum.

Potential options

No vetting

At one extreme end of the continuum, we could let anyone create a fund, which they manage however they want and which anyone can donate to. To add slightly more quality control, we could require certain kinds of reporting and a standard set of information on each fund's page.

Democratic vetting

We could let anyone create a fund, but only keep funds that receive a certain amount of support from the community (e.g. donations or "votes" of some kind). Alternatively, we could let anyone propose a fund, but only accept some small number of funds as determined by community support (e.g. pledges to donate).

Plausibility vetting

We could let anyone propose a fund, but then have CEA (or some set of trusted researchers) review the proposals and reject any funds that we think are not plausibly good candidates.

The precise definition of "plausibility" in this context is up for grabs, but the goal would be to reject only the funds and fund managers that seem like especially poor options. The process could use some method of democratic vetting to further narrow down the field from among the plausible options.

"Reasonable-person" vetting

Using the process described above, we could apply a stricter "reasonable person" standard. The goal would be to accept only funds that a reasonable person might think are better than some benchmark. For example, we could allow only funds that a reasonable person might think are better than AMF or better than the existing funds. Anyone could propose a fund and have this standard applied, or proposing a fund could be an invite-only process.

"Better than" vetting

Finally, we could accept only funds that CEA (or some set of trusted researchers) thinks are better than the existing options, for some criterion of betterness. This differs from the reasonable-person standard because it requires that we think the fund is actually better than the existing options, not merely that we can see how someone might think it is.

Hybrid options

We could also combine multiple approaches to form hybrid options. Some rough ideas for how we might do this are below:

Start closed and open up over time

We could vet funds very closely for the first few rounds of adding new funds, then decrease the vetting requirements over time.

Low vetting plus nudges

We could do very little vetting when funds are created, but nudge users towards the funds that we think are most promising. For example, the default [allocation page](https://app.effectivealtruism.org/donations/new) could include only highly promising funds, with less promising options made less immediately obvious.

Considerations

Below are some considerations that might factor into the decision of how closely to vet new funds. These are presented in no particular order.

Inclusion in EA Funds as a nudge

User behavior so far suggests that many people choose to split their donation among several funds instead of donating all of their money to a single fund. This suggests that donors see inclusion in EA Funds as a sign of quality and that a fund's inclusion nudges people to donate to causes they might not have given to otherwise. This was also borne out in some Skype conversations we had with early users.

This increases the potential for new funds to cause harm by attracting money that might have been better spent elsewhere.

Administrative costs

Each fund adds a small but nontrivial administrative cost to CEA.

For each fund, CEA needs to communicate with the fund manager regularly about the amount of money available, whether they have new grant recommendations, and posting updates to the website. We also incur administrative costs every time a grant is made, as we need the trustees to approve the grant and we need to work with the charity to get them the money. We could probably develop systems to decrease administrative costs if the scale of the project required it, but we likely wouldn't be able to do so in the short term.

Low-quality funds might also make it harder to acquire (and retain) high-quality fund managers, as being associated with the project becomes less prestigious.

Researcher recruitment

One source of value from EA Funds is that it might help incentivize talented researchers to do high-quality work on where people ought to donate. Lower barriers to entry in setting up a fund might increase the pipeline of researcher talent that EA Funds helps create.

Funding externally controversial projects

One affordance we'd like EA Funds to have is the ability to fund high-impact but externally controversial projects.

Plausibly, the more funds we have, and the more EA Funds is an open platform, the less the actions of a single fund will negatively affect the platform as a whole. So we might gain more affordance to fund controversial projects by adding more funds.

New funds and acquiring new users

It seems plausible that more funds would make it easier to attract more users, for two reasons. First, when someone sets up a fund, they will likely reach out to their network to get people to donate, which may help us acquire users. Second, the more variety we offer, the more likely it is that donors find funds that strongly resonate with them.

The marketplace of ideas

Lower barriers to entry would promote a more open and thriving marketplace of ideas about where people should donate.

Expertise

EA Funds was conceived as a way of making individuals' donation decisions easier by allowing them to draw on the expertise of people or groups who have greater subject-matter expertise and are more up to date with the latest research on their fund's topic, current funding opportunities in the space, and organizational funding constraints. There is a tradeoff between creating fewer new funds that are genuinely expert-led and a greater number of funds where the average level of expertise is lower.

Conclusion

This post has attempted to describe some of the open questions about EA Funds and the relevant considerations, as a way to solicit feedback and new ideas from the EA community. I look forward to a discussion in the comments here, and in person for anyone at EA Global: Boston this weekend.

The next steps for this process are for me to review comments on this post and to discuss the topic with the rest of the CEA team. Afterward, I plan to write a follow-up post that outlines either the option we selected and why, or the options we're currently deciding between. If you have thoughts that you'd prefer not to share here, feel free to email me at kerry@effectivealtruism.org.

Please note that due to EA Global: Boston, CEA staff might be slower to respond to comments than usual.

Just wanted to mention that I thought this was a really good post. I think it did a good job of asking for community input at a time when it's potentially decision-relevant but when enough considerations are known that some plausible options can be put forth.

I think it also did a good job of describing lots of considerations without biasing the reader strongly in favor of/against particular ones.

1. A ‘life-improving’ or ‘quality of life’ fund that tries to find the best ways to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is the area I do my research on too, so I’d be very enthusiastic to help whoever the fund manager was.

2. A systemic change fund. Part of this would be reputational (i.e. no one could then complain EAs don’t take systemic change seriously); another part is that I’d really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.

3. A ‘moonshots’ fund that supported high-risk, potentially high-reward projects. For reasons similar to 2, I think this would be a really useful way for us to learn.

My general thought is the more funds the better, presuming you can find qualified enough people to run them. It has the positive effect of demonstrating EA’s openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume people who would donate to that wouldn’t have been donating to, say, the health fund already; they’d probably be supporting green charities instead.

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory-farming interventions are more important; but given the choice between a “safe, guaranteed to help” fund and a “moonshot” fund, I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than to a fund for a specific cause area, because I’m likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that’s okay.

> I have basically no idea whether AI safety or anti-factory-farming interventions are more important; but given the choice between a “safe, guaranteed to help” fund and a “moonshot” fund, I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense).

Mostly agree, but you need a couple more assumptions to make that work.

Poverty = a person-affecting view of population ethics or pure time discounting, plus the belief that poverty relief is the best way to increase well-being (I’m not sure it is; see my old forum post).

Also, you could split poverty (things like GiveDirectly) from global health (AMF, SCI, etc.). You probably need a person-affecting view or pure time discounting if you support health over x-risk, unless you’re just really sceptical about x-risks.

Animals = I think animals are only a priority if you believe in an impersonal population ethic like totalism (maximise happiness over the history of the universe, hence creating happy life is good), and you either do pure time discounting or you’re suffering-focused (i.e. unhappiness counts more than happiness).

If you’re a straightforward presentist (a person-affecting population ethic on which only presently existing things count), which is what you might mean by ‘short term’, you probably shouldn’t focus on animals. Why? Animal welfare reforms don’t benefit the presently existing animals, but the next generation of animals, who don’t count on presentism as they don’t presently exist.

Good point on the axes. I think we would, in practice, get fewer than 16 funds, for a couple of reasons.

It’s hard to see how some funds would, in practice, differ. For instance, is AI safety a moonshot or a safe bet if we’re thinking about the future?

The life-saving vs life-improving point only seems relevant if you’ve already signed up to a person-affecting view. Talking about ‘saving lives’ of people in the far future is a bit strange (although you could distinguish between a far-future fund that tried to reduce x-risk and one that invested in ways to make future people happier, such as genetic engineering).

Hey Michael, great ideas. I’d like to see all of these as well. My concern would just be whether there are charities available to fund in these areas. Do you have some potential grant recipients for these funds in mind?

Hello Kerry. Building on what Michael Dickens said, I now think the funds need to be more tightly specified before we can pick the most promising recipients within each. For instance, imagine we have a ‘systemic change’ fund; presumably a totalist systemic change fund would be different from a person-affecting, life-improving one. It’s possible they might consider the same things top targets, but more work would be required to show that.

Narrowing down, then:

Suppose we had a life-improving fund using safe bets. I think charities like StrongMinds and BasicNeeds (mental health orgs) are good contenders, although I can’t comment on their organisational efficiency.

Suppose we had a life-improving fund doing systemic change. I assume this would be trying to bring about political change via government policies, either at the domestic or international level. I can think of a few areas that look good, such as mental health policy, increasing access to pain relief in developing countries, and international drug policy reform. However, I can’t name and exalt particular orgs, as I haven’t narrowed down what I think the most promising sub-causes are yet.

Suppose we had a life-improving moonshots fund. If this is going to be different from the one above, I imagine it would be looking for start-ups, maybe a bit like EA Ventures did. I can’t think of anything relevant to suggest here apart from the start-up I work on (the quality of which I can’t hope to be objective about). Perhaps this fund could look at starting new charities too, rather than only funding existing ones.

I don’t think not knowing who you’d give money to in advance is a reason not to pursue this further. For instance, I would consider donating to some type of moonshots fund precisely because I had no idea where the money would go and I’d like to see someone (else) try to figure it out. Once they’d made their choices, we could build on their analysis and learn stuff.

I really like the idea of doing more to identify new potential cause areas.

Vetting is really important, but I’m wary of the idea of anointing a specific EA org with sole discretion over vetting decisions. If possible, democratic vetting would be ideal (challenging though such arrangements can be).

I do see some advantages of keeping the number of funds low at this level of money moving through, because it increases the chance that any one particular fund will be able to support a particularly promising project that isn’t appreciated by other donors.

I was thinking that there could be a tie-in with Giving What We Can’s My Giving. You could tick a box to make your My Giving profile public, and then have another box for people browsing to “copy this donor’s distribution of donations”, as some trading websites (such as eToro) offer. Although they would not, unfortunately, come with tallies of expected total utilons produced, there could be league tables of the most-copied donors, by number of people copying and amount donated following their distribution.

I’m excited about the idea of new funds. As a prospective user, my preferences are:

Limited / well-organised choices. This is because I, like many people, get overwhelmed by too many choices. For example, perhaps I could choose between global poverty, animal welfare, and existential risks, and then choose between options within the category (e.g. “Low-Risk Global Poverty Fund” or “Food Security Research Fund”).

Trustworthy fund managers / reasonable allocation of funds. There are many reasonable ways to vet new funds, but ultimately I’m using the service because I don’t want to have to carefully vet them myself.

> or we could add a long-term future fund that focused on areas other than AI safety.

+1 for differentiation. A fund specifically for AI safety would probably have demand; I’d donate. Other funds for other specific GCRs could be created if there’s enough demand too.

A mild consideration against would be if there are funding opportunities in the long-term future area that would benefit both AI safety and the other GCRs, such as the cross-disciplinary Global Catastrophic Risk Institute, and splitting would make it harder for these to be funded, maybe?