The last post looked at whether we could grow more clueful by intentional effort. It concluded that, for the foreseeable future, we will probably remain clueless about the long-run impacts of our actions to a meaningful extent, even after taking measures to improve our understanding and foresight.

Given this state of affairs, we should act cautiously when trying to do good. This post outlines a framework for doing good while being clueless, then looks at what this framework implies about current EA cause prioritization.

The following only makes sense if you already believe that the far future matters a lot; this argument has been made elegantly elsewhere, so we won’t rehash it here.[1]

An analogy: interstellar travel

Consider a spacecraft, journeying out into space. The occupants of the craft are searching for a star system to settle. Promising destination systems are all very far away, and the voyagers don’t have a complete map of how to get to any of them. Indeed, they know very little about the space they will travel through.

To have a good journey, the voyagers will have to successfully steer their ship (both literally & metaphorically). Let’s use “steering capacity” as an umbrella term for the capacity needed to have a successful journey.[2] “Steering capacity” can be broken down into the following five attributes:[3]

The voyagers must have a clear idea of what they are looking for. (Intent)

The voyagers must be able to reach agreement about where to go. (Coordination)

The voyagers must be discerning enough to identify promising systems as promising when they encounter them. Similarly, they must be discerning enough to accurately identify threats & obstacles. (Wisdom)

Their craft must be powerful enough to reach the destinations they choose. (Capability)

Because the voyagers travel through unmapped territory, they must be able to see far enough ahead to avoid the obstacles they encounter. (Predictive power)

This spacecraft is a useful analogy for thinking about our civilization’s trajectory. Like us, the space voyagers are somewhat clueless – they don’t know quite where they should go (though they can make guesses), and they don’t know how to get there (though they can plot a course and make adjustments along the way).

The five attributes given above – intent, coordination, wisdom, capability, and predictive power – determine how successful the space voyagers will be in arriving at a suitable destination system. These same attributes can also serve as a useful framework for considering which altruistic interventions we should prioritize, given our present situation.

The basic point

The basic point is that interventions whose main known effects do not improve our steering capacity (i.e. our intent, coordination, wisdom, capability, and predictive power) are not as important as interventions whose main known effects do improve these attributes.

An implication of this is that interventions whose effectiveness is driven mainly by their proximate impacts are less important than interventions whose effectiveness is driven mainly by increasing our steering capacity.

This is because any action we take is going to have indirect & long-run consequences that bear on our civilization’s trajectory. Many of the long-run consequences of our actions are unknown, so the future is unpredictable. We should therefore prioritize interventions that improve the intent, coordination, wisdom, and capability of future actors, so that they are better positioned to address problems we did not foresee.

What being clueless means for altruistic prioritization

I think the steering capacity framework implies a portfolio approach to doing good – simultaneously pursuing a large number of diverse hypotheses about how to do good, provided that each approach maintains reversibility.[4]

This approach is similar to the Open Philanthropy Project’s hits-based giving framework – invest in many promising initiatives with the expectation that most will fail.

Below, I look at how this framework interacts with focus areas that effective altruists are already working on. Other causes that EA has not looked into closely (e.g. improving education) may also perform well under this framework; assessing causes of this sort is beyond the scope of this essay.

To prioritize – better understanding what matters

Increasing our understanding of what’s worth caring about is important for clarifying our intentions about what trajectories to aim for. For many moral questions, there is already broad agreement in the EA community (e.g. the view that all currently existing human lives matter is uncontroversial within EA). On other questions, further thinking would be valuable (e.g. how best to compare human lives to the lives of animals).

To prioritize – improving foresight

Improving foresight & prediction-making ability is important for informing our decisions. The further we can see down the path, the more information we can incorporate into our decision-making, which in turn leads to higher-quality outcomes with fewer surprises.

Forecasting ability can certainly be improved from baseline, but there are probably hard limits on how far into the future we can extend our predictions while keeping them credible.

To prioritize – reducing existential risk

Reducing existential risk can be framed as “avoiding large obstacles that lie ahead.” Avoiding extinction and the “lock-in” of suboptimal states is necessary for realizing the full potential benefit of the future.

To prioritize – increasing the number of well-intentioned, highly capable people

Well-intentioned, highly capable people are a scarce resource, and will almost certainly continue to be highly useful going forward. Increasing the number of well-intentioned, highly capable people seems robustly good, as such people are able to diagnose & coordinate on future problems as they arise.

In a different vein, psychedelic experiences hold promise as a treatment for treatment-resistant depression, and may also improve the intentions of highly capable people who have not reflected much about what matters (“the betterment of well people”).

EA focus areas to deprioritize, maybe

The steering capacity framework suggests deprioritizing animal welfare & global health interventions, to the extent that these interventions’ effectiveness is driven by their proximate impacts.

Under this framework, prioritizing animal welfare & global health interventions may be justified, but only on the basis of improving our intent, coordination, wisdom, capability, or predictive power.

To deprioritize, maybe – animal welfare

To the extent that animal welfare interventions expand our civilization’s moral circle, they may hold promise as interventions that improve our intentions & understanding of what matters (the Sentience Institute is doing work along this line).

However, following this framework, the case for animal welfare interventions has to be made on these grounds, not on the basis of cost-effectively reducing animal suffering in the present.

This is because the animals helped by such interventions cannot help “steer the ship” – they cannot contribute to making sure that our civilization’s trajectory is headed in a good direction.

To deprioritize, maybe – global health

To the extent that global health interventions improve coordination, or reduce x-risk by increasing socio-political stability, they may hold promise under the steering capacity framework.

However, the case for global health interventions would have to be made on the grounds of increasing coordination, reducing x-risk, or improving another steering capacity attribute. Arguments that global health interventions cost-effectively help people in the present day (without consideration of how this bears on our future trajectory) are not competitive under this framework.

Conclusion

In sum, I think the fact that we are intractably clueless implies a portfolio approach to doing good – pursuing, in parallel, a large number of diverse hypotheses about how to do good.

Interventions that improve our understanding of what matters, improve governance, improve prediction-making ability, reduce existential risk, and increase the number of well-intentioned, highly capable people are all promising.[5] Global health & animal welfare interventions may hold promise as well, but the case for these cause areas needs to be made on the basis of improving our steering capacity, not on the basis of their proximate impacts.

Thanks to members of the Mather essay discussion group and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to LessWrong & my personal blog.

[2]: I’m grateful to Ben Hoffman for discussion that fleshed out the “steering capacity” concept; see this comment thread.

[3]: Note that this list of attributes is not exhaustive & this metaphor isn’t perfect. I’ve found the space-travel metaphor useful for thinking about cause prioritization given our uncertainty about the far future, so am deploying it here.

[4]: Maintaining reversibility is important because, given our cluelessness, we are unsure of the net impact of any action. When uncertain about overall impact, it’s important to be able to walk back actions that we come to view as net negative.

[5]: I’m not sure how to prioritize these things amongst themselves. Probably improving our understanding of what matters & our predictive power are highest priority, but that’s a very weakly held view.

I like this post Milan, I think it’s the best of your series. I think that you rightly picked a very important topic to write about (cluelessness) that should receive more attention than it currently does. I do have some comments:

Although I admire new ways to think about prioritisation, I have two worries:
Conceptual distinction. Wisdom and predictive power do not seem conceptually distinct. Both are about our ability to identify and predict the probability of good and bad outcomes. Intent also seems a little tangled up in wisdom, although I can see that we want to separate those. Furthermore, intent influences coordination capability: the more the intentions of a population differ, the more difficult coordination becomes.

This creates the second worry: this model adds only one dimension (Intent) to the 3-dimensional model of Bostrom’s Technology [Capacity] – Insight [Wisdom] – Coordination. Do you think this increases the usefulness of the model enough? The advantage of Bostrom’s model is that it allows for differential progress (wisdom > coordination > capacity), while you don’t specify the interplay of attributes. Are they supposed to be multiplied, or are some combinations better than others, or do we want differential progress?

I was a bit confused that you write about things to prioritise but don’t refer back to the 5 attributes of the steering capacity. Some relate more strongly to specific attributes, and some attributes are not discussed much (coordination) or at all (capability).

Further our understanding of what matters

This seems to be Intent in your framework. I totally agree that this is valuable. I would call this moral (or more precisely: axiological) uncertainty, and people work on this outside of EA as well. By the way, besides resolving uncertainty, another pathway is to improve our methods for dealing with moral uncertainty (as MacAskill argues).

Improve governance

I am not sure which attribute this relates to, though I suppose it is Coordination. I find the discussion a bit shallow here, as it discusses only institutions, and not the coordination of individuals in e.g. the EA community, or the coordination between nation states.

Improve prediction-making & foresight

This seems to be the attribute predictive power. I agree with you that this is very important. To a large extent, this is also what science in general is aiming to do: improving our understanding so that we can better predict and alter the future. However, straight-up forecasting seems more neglected. I think this could also just be called “reducing empirical uncertainty”? If we call it that, we can also consider other approaches, such as researching effects in complex systems.

Reduce existential risk

I’m not sure this was intended to relate to a specific attribute. Guess not.

Increase the number of well-intentioned, highly capable people

This seems to relate mostly to “Intent” as well. I wanted to remark that this can be done either by increasing the capability and knowledge of well-intentioned people, or by improving the intentions of capable (and knowledgeable) people. My observation is that so far, the focus has been on the latter in terms of growth and outreach, and only some effort has been expended to develop the skills of effective altruists. (Although this is noted as a comparative advantage for EA Groups.)

Lastly, I wanted to remark that hits-based giving does not imply a portfolio approach, in my opinion. It just implies being more or less risk-neutral in altruistic efforts. What drives the diversification in OPP’s grants seems to be worldview diversification, option value, and the possibility that high-value opportunities are spread over cause areas rather than concentrated in one cause area. I think what would support the conclusion that we need to diversify is the possibility that we need to hit a certain value on each of the attributes or else the project fails (a bit like how power laws arise from success requiring A×B×C instead of A+B+C).

All in all, an important project, but I’m not sure how much novel insight it has brought (yet). This is quite similar to my own experience, in that I wrote a philosophy essay about cluelessness and arrived at a not-so-novel conclusion. Let me know if you’d like to read the essay :)

I’m using “predictive power” as something like “ability to see what’s coming down the pipe” and “wisdom” as something like “ability to assess whether what’s coming down the pipe is good or bad, according to one’s value system.”

On your broader point, I agree that these attributes are all tangled up in each other. I don’t think there’s a useful way to draw clean distinctions here.

I was a bit confused that you write about things to prioritise, but don’t refer back to the 5 attributes of the steering capacity.

This is a good point, I’ll think about this more & get back to you.

quite similar to my own experience in that I wrote a philosophy essay about cluelessness

I would add something like “Sensitivity” to the list of attributes needed to navigate the world.

This is different from Predictive Power. You can imagine two ships with the exact same compute power and Predictive Power: one with cameras on the outside and long-range sensors, one blind without them. You’d expect the first to do a lot better moving about the world.

In Effective Altruism’s case, I suspect this would be things like basic empirical research about the state of the world and the things important to their goals.