I had a notion here that I could stochastically introduce a new goal that would minimize total suffering over an agent's life-history. I tried this, and the most stable solution turned out to be this: introduce an overwhelmingly aversive goal that causes the agent to run far away from all of its other goals, screaming. Fleeing in perpetual terror, it will be too far away from its attractor-goals to feel much expected valence towards them, and thus won't feel much regret about running away from them. And it is, in a sense, satisfied that it is always getting further and further away from the object of its dread.
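
To make the dynamic concrete, here's a minimal sketch of how such a system might behave; everything in it (the names, the exponential valence-decay, the gradient-ascent dynamics, the magnitudes) is my illustrative assumption rather than the actual implementation:

```python
import numpy as np

# Attractor-goals: positions on a line, each with a positive valence weight.
goal_pos = np.array([1.0, 3.0, 5.0])
goal_val = np.array([1.0, 0.5, 0.8])

# The stochastically introduced goal: overwhelmingly aversive, sitting near
# the agent's start (magnitude chosen so repulsion outweighs every pull back).
dread_pos, dread_val = 0.0, -1000.0

def pull(x, pos, val, scale=1.0):
    # Expected valence felt toward a goal, decaying with distance
    # (the exponential decay form is an illustrative assumption).
    return val * np.exp(-np.abs(x - pos) / scale)

def regret(x):
    # Unsatisfied expected valence toward the attractor-goals being abandoned.
    return pull(x, goal_pos, goal_val).sum()

def total_valence(x):
    return regret(x) + pull(x, dread_pos, dread_val)

# Gradient ascent on felt valence: attracted to the goals, repelled by the dread.
x, lr, eps = 2.0, 0.05, 1e-4
for _ in range(500):
    x += lr * (total_valence(x + eps) - total_valence(x - eps)) / (2 * eps)

print(f"final position: {x:.2f}, residual regret: {regret(x):.4f}")
# The agent flees rightward indefinitely; its expected valence toward (and
# regret about) the goals it left behind decays toward zero as it runs.
```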

Interestingly, this seems somewhat similar to the reactions of severely traumatized people, whose senses partially shut down so that they stop feeling or wanting anything. And then there's also suicide, for when the “avoid suffering” goal grows too strong relative to the other ones. For humans there's a counterbalancing goal of avoiding death, but your agents didn't have an equivalent balancing desire to stay “alive” (or within reach of their other goals).
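
For what it's worth, under the same toy assumptions as the sketch above, adding that kind of counterweight changes the outcome: a hypothetical “stay within reach” cost (growing with distance from the nearest attractor-goal) makes fleeing forever unstable, and the agent halts instead:

```python
def total_valence_balanced(x, stay_weight=2.0):
    # As above, plus an assumed counterbalancing pull: a cost that grows
    # with distance from the nearest attractor-goal ("stay alive").
    return total_valence(x) - stay_weight * np.min(np.abs(x - goal_pos))

x = 2.0
for _ in range(500):
    x += lr * (total_valence_balanced(x + eps)
               - total_valence_balanced(x - eps)) / (2 * eps)

print(f"with the counterweight, the agent settles near x = {x:.2f}")
# It stops a little past the outermost goal: far enough to blunt the dread,
# close enough that the staying pull balances the remaining repulsion.
```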