Fund

Our fund helps you give more effectively with minimal time investment. It works similarly to a mutual fund, but the fund managers aim to maximize the impact of your donations instead of your investment returns. They use the pooled donations to make grants to promising projects and individuals whose work will contribute most to the mission of the fund.

Mission

The mission of the REG Fund (a.k.a. CLR Fund) is to support research and policy efforts to prevent the worst technological risks facing our civilization. The potentially transformative nature of artificial intelligence poses a particular challenge that we want to address. We want to prevent a situation similar to the advent of nuclear weapons, in which careful reflection on the serious implications of this technology took a back seat during the wartime arms race. As our technological power grows, future inventions may cause harm on an even larger scale—unless we act early and deliberately.

Why donate to this fund

Giving through a fund can increase the impact of your donation in several ways:

Unique opportunities. Some funding opportunities, such as academic grants, are simply not open to most individual donors, unless they pool their contributions in a fund or donor lottery.

Economies of scale. Finding the best funding opportunities is difficult and time-consuming, since it requires weighing many different considerations and a large body of relevant research. A fund allows many donors with limited time to delegate this work to the fund managers, who can in turn invest significant time identifying the best recipients for many people at once, making the process far more efficient.

Expert judgment. The fund managers have built up knowledge in the relevant domains and consult with technical experts where appropriate. They have thought about the long-term effects of different philanthropic interventions for years. Expert judgment might be particularly important in this domain since, unlike in other cause areas, no charity evaluator such as GiveWell yet exists for selecting organizations dedicated to improving the long-term future.1

You should give to this fund in particular if:

you value future lives as much as current ones, and you expect most individuals to exist in the long-term future;

you think there is a significant chance that advanced artificial intelligence will shape the future in profound ways and cause harm on an unprecedented scale;

you believe there are actions we can take right now to mitigate these risks;

you are particularly concerned about worst-case scenarios and s-risks.

Fund Management

Lukas Gloor is responsible for prioritization at the Center on Long-Term Risk and coordinates our research with other organizations. He conceptualized worst-case AI safety and helped coin and establish the term s-risks. Currently, his main research focus is on better understanding how different AI alignment approaches affect worst-case outcomes. He also helped found REG in 2014 and is a recreational poker player.

Brian Tomasik has written prolifically and comprehensively about ethics, animal welfare, artificial intelligence, and the long-term future from a suffering-focused perspective. His ideas have been very influential in the effective altruism movement, and he helped found the Center on Long-Term Risk, which he still advises. He graduated from Swarthmore College in 2009, where he studied computer science, mathematics, statistics, and economics.

Jonas Vollmer is the Co-Executive Director of the Center on Long-Term Risk, where he is responsible for setting the strategic direction, management, and communications with the effective altruism community. He holds degrees in medicine and economics with a focus on health economics and development economics. He previously served on the boards of several charities, is an advisor to the EA Long-term Future Fund, and played a key part in establishing the effective altruism movement in continental Europe.

Grantmaking Process

Grant decisions are made by a simple majority of the fund managers.

Recipients may be charitable organizations, academic institutions, or individuals.2

Grants are made every six to twelve months. We invest funds that we are not going to allocate soon.3

Past Grants

2019

We made a grant of £66,000 ($81,503 at the time of conversion) to Dr. Arif Ahmed to free him from his teaching duties for a year. Ahmed is a University Reader in Philosophy at the University of Cambridge. His previous work includes the book Evidence, Decision and Causality and an academic conference entitled “Self-prediction in Decision Theory and Artificial Intelligence,” with contributions from technical AI safety researchers. This teaching buy-out will allow Ahmed to research evidential decision theory (EDT) further and, among other things, write another academic book on the topic.

We see this grant as a contribution to foundational research that could ultimately become relevant to AI strategy and technical AI safety research. As described by Soares and Fallenstein (2015, p. 5; 2017) and the “Acausal reasoning” section of CLR’s research agenda, advancing our understanding of non-causal reasoning and the decision theory of Newcomblike problems could enable further research on ensuring more cooperative outcomes in the competition among advanced AI systems. We also see value in raising awareness of the ways in which causal reasoning falls short, especially in the context of academic philosophy, where non-causal decision theory is not yet established.

Due to the foundational nature and philosophical orientation of this research, we remain uncertain whether the grant will achieve its intended goal and the supported work will become applicable to AI safety research. That said, we believe that Ahmed has an excellent track record and is exceptionally well-suited to carry out this type of research, especially considering that much work in the area has been non-academic thus far. In addition to the above, we also think it is valuable for the REG Fund (and effective altruist grantmakers in general) to develop experience with academic grantmaking.

Tobias Pulver applied for a two-year scholarship of CHF 63,000 ($63,456 at the time of conversion) to pursue a Master’s degree in Comparative and International Studies at ETH Zurich. This is a political science degree that allows focusing on international relations, security policy, and technology policy. The majority of this grant will be used to cover living costs in Zurich.

We see this grant as an investment in Pulver’s career in AI policy research and implementation. We are impressed by Pulver’s altruistic commitment and interest in reducing s-risks, his academic track record, his strategic approach to his career choice, and his admission to a highly competitive Master’s program at a top university. Pulver recently pursued an independent research project to explore his fit for AI policy research, which we thought was sound. He intends to keep engaging with EA-inspired AI governance research by applying to relevant fellowships at EA organizations.

Pulver is a former staff member of the Center on Long-Term Risk who decided to transition into AI governance due to personal fit considerations. Two out of three fund managers have worked with Pulver before and therefore have high confidence in the above assessment (and the third fund manager was also in favor). After thinking carefully about the potential downsides of a conflict of interest, we believe these are outweighed by the benefits of detailed knowledge in this particular case. To reduce the risk of favoring grantees we already know, our fund managers are investing time and resources to get to know many potential grantees.

We have a generally favorable view of the Wild Animal Initiative (WAI) as an organization, though we did not conduct a thorough evaluation. Their research proposal prominently mentioned various considerations that explore the relationship between long-termism and wild-animal welfare research, but those considerations were not yet well developed. We also thought that some of their expectations regarding the impact of their project were too optimistic. That said, we are excited to see more research into the tractability, reversibility, and resilience of wild-animal welfare interventions.

We do not believe that research on wild-animal welfare contributes to the REG Fund’s main priorities, but we think it might help improve concern for suffering prevention. While we might not make any further grants in the area of wild-animal welfare, we decided in favor of this grant due, in part, to the currently large amount of funding available.

Note that WAI was created through a merger that involved a largely independent project previously housed at the Effective Altruism Foundation, our parent organization.

As part of the REG Fund’s first open application round, Miles Tidmarsh, Vasily Kuznetsov, Paolo Bova, and Jonas Emanuel Müller applied for a grant to carry out a research project exploring whether cooperation can defuse races to build powerful technologies such as artificial intelligence. The project would extend the Racing to the Precipice model using an agent-based modeling methodology.

We decided to fund only a fraction of the requested grant amount ($20,000 instead of $75,000) and see this grant primarily as an investment in the grantees’ careers, learning, and exploration of further research projects, rather than as supporting the research project they submitted.

When investigating this grant, we sought the opinions of internal and external advisors. Many liked the general research direction and perceived the team to be competent. One person who reviewed their project in more detail reached a tentative negative conclusion and emphasized that the project team might benefit from more research experience. Another evaluator was tentatively skeptical that agent-based models can be applied usefully to AI races at this point. They recommended that the grantees look more into ensuring that the research will have a connection to real-world problems. We also observed that the team repeatedly sought external input, but did not seem to engage with critical feedback as productively as other grantees.

That said, we have been impressed by Jonas Emanuel Müller’s strong long-term commitment to effective altruism (in particular, his successful earning-to-give career and attempt to transition into direct work), his drive to understand the literature on AI strategy and s-risks, and his unusual awareness of the potential risks of accidental harm.

For these reasons, we decided to make a smaller grant than requested and encouraged the grantees to consider different lines of research. We think this grant has low downside risk and could potentially result in valuable future research projects. Some of our fund managers also think we might be wrong with our pessimistic assessment, generally like to support a diverse range of approaches and perspectives, and think that this grant might enable a valuable learning experience even if the grantees decide to continue their current project without incorporating our suggestions.

We think it is probably very difficult to produce significant new insights through such foundational research. We think that applying standard models to analyze the specific scenarios outlined in the research proposal might turn out to be valuable, though we also do not think that doing so is a priority for reducing s-risks.

We also see this grant as an investment in Sevilla’s career as a researcher. We were impressed by a paper draft on the relevance of quantum computing to AI alignment that Sevilla is co-authoring and might have decided against this grant otherwise. We think it is unlikely that Sevilla will make s-risks a primary focus of his research, but we hope that he might make sporadic contributions to the REG Fund’s research priorities.

As part of the REG Fund’s first open application round, Riley Harris applied for travel and conference funding to attend summer school programs and conferences abroad. Harris is a talented Master’s student at the University of Adelaide interested in pursuing an academic career in economics.

We see this grant as an investment in Harris’s potential academic career. His current interest is in game theory and behavioral economics, with potential applications in AI governance.

While we have been somewhat impressed by Harris’s academic track record and interest in effective altruism and AI risk, one fund manager felt unsure about his ability to get quickly up to speed with the research on s-risk, pursue outstanding original research, and convey his thinking clearly. We hope that this grant will help Harris determine whether an economics PhD is a good personal fit for him.

2018

We made a grant to Daniel Kokotajlo to free him from his teaching duties for a year. He is currently pursuing a PhD in philosophy at the University of North Carolina at Chapel Hill. The grant will double the hours he can dedicate to his research. His work will focus mainly on improving our understanding of acausal interactions between AI systems. We want to learn more about whether such acausal interactions are possible and what they imply for the prioritization of effective altruists. We believe this area of research is currently neglected because only a handful of people have done scholarly work on this topic, and many questions are still unexplored. We were impressed by Kokotajlo’s previous work and his research proposals and therefore believe that he has the skills required to make progress on these questions.

We made a grant to Rethink Priorities for implementing a survey designed to study the population-ethical views of the effective altruism community. More common knowledge about values within the effective altruism community will make moral cooperation easier. There is also a chance that a more open discussion of fundamental values will lead some members of the community to adjust their prioritization in a way they endorse. The grant allows Rethink Priorities to contract David Moss. He has experience running and analyzing the SHIC survey and the 2015 Effective Altruism Survey. We have reason to believe that the project will be well executed. It is unlikely that this survey would have been funded by anybody else.

Rethink Priorities will also use part of the grant to conduct a representative survey on attitudes towards reducing the suffering of animals in the wild. While we do not think this is as valuable as their descriptive ethics project, the gathered information will likely still result in important strategic insights for a cause area we are very sympathetic towards. This survey will also be led by David Moss, in collaboration with academics at Cornell University.

2017

Future of Humanity Institute: $56,460

Machine Intelligence Research Institute: $29,625

Animal Ethics: $10,000

2016

Machine Intelligence Research Institute: $117,561

The Humane Slaughter Association: $22,456

Animal Ethics: $21,831

The Swiss Vegan Society: $10,826

Center for Effective Vegan Advocacy: $10,673

Nonhuman Rights Project: $10,673

2015

Machine Intelligence Research Institute: $44,878

Animal Ethics: $20,594

Centre for Effective Altruism: $14,469

Deworm the World Initiative: $14,469

The Great Ape Project: $14,205

New Incentives: $10,224

Nonhuman Rights Project: $10,224

Center For Applied Rationality: $5,584

2014

Against Malaria Foundation: $10,138

Centre for Effective Altruism: $10,138

Center For Applied Rationality: $7,628

GiveDirectly: $7,628

1 The Open Philanthropy Project makes grants in this area, but it publishes only a few rigorous analyses or comparative reviews.

2 Due to conflicts of interest, we will not make any grants to the Effective Altruism Foundation (our parent organization) or its affiliate projects.


3 We invest funds exceeding 9–12 months of expected grantmaking expenses in the global stock market, in accordance with the Effective Altruism Foundation’s investment policy, to create capital growth for the fund. Contributions made before the policy’s announcement in December 2019 are exempt.
