Thanks for this. I'd like to ask you the same question I'm asking others in this thread.

I do wonder about the prospect of 'solving' extinction risk. Do you think EAs who advocate reducing extinction risk actually expect these risks to become small enough that shifting focus to something like animal suffering would ever be justified? I'm not convinced they do, since extinction in their eyes is so catastrophically bad that even tiny reductions in its probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?
