>I bet if you tried hard enough, you could think of a better
>decision-theoretic solution to the above problem than "fixing the set
>of people you care about" - which it's already too late for me to do,

There's some ambiguity in what I said. In the phrase "fixing the set
of people you care about" I intended "fix" to mean "keep permanently
constant", not "repair".

If I'm getting your point correctly, you're saying that if you have
infinite resolution in your utility function, or an infinite planning
horizon, or an infinite number of potential people you're trying to
include in your altruism, or an infinite maximum utility, then your
values have contradictions in them.

>People sure are in a rush to hack the utility function all sorts of
>ways... probably because they don't understand what this little mathy
>object *means*; it's the set of things you really, actually care about.

I can't find a sensible interpretation of this.

I could try to take you literally here. I'm human. Therefore I don't
really maximize a utility function. Therefore the above statement is
simply false. Gee, that didn't last long.

My most plausible figurative interpretation would be as follows. You
don't need to read the whole thing; the important difference is
enclosed in _..._'s:

People sure are in a rush to hack the utility function all sorts of
ways... probably because they don't understand what this little
mathy object *means*; it's the set of things
_a_rational_actor_we_might_construct_someday_ really, actually
cares about.

It's entirely self-consistent for a rational actor to bound its
planning horizon, maximum utility, and the resolution of its utility
function. A rational actor that is more-or-less altruistic might even
fix the set of people it's being altruistic for, and regard children
born after it is constructed as a blob of protoplasm that the parents
care about, rather than as a separate entity that deserves care on its
own.

(There are other ways to deal with newborns.)

This is entirely consistent with the rational actor "winning"
according to its own goals.
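To make the self-consistency claim concrete, here is a minimal sketch
of such a "hacked" utility function. Every name and number in it is
illustrative, not anything from this exchange; it just shows that a
utility function with a fixed set of people, a finite planning
horizon, a cap, and a coarse resolution is a perfectly well-defined
object to maximize:

```python
# Hypothetical bounded utility function, for illustration only.
FIXED_PEOPLE = frozenset(["alice", "bob"])  # set fixed at construction time
PLANNING_HORIZON = 100   # outcomes beyond this many steps are ignored
MAX_UTILITY = 1000.0     # no outcome is worth more than this
RESOLUTION = 0.01        # differences smaller than this are ignored

def utility(welfare_by_person, time_step):
    """Bounded altruistic utility.

    welfare_by_person: dict mapping person -> welfare score
    time_step: how far in the future the outcome occurs
    """
    if time_step > PLANNING_HORIZON:
        return 0.0  # beyond the horizon, the actor is indifferent
    # Only the fixed set counts; people born later contribute only
    # through the welfare of the fixed people who care about them.
    total = sum(w for p, w in welfare_by_person.items()
                if p in FIXED_PEOPLE)
    total = min(total, MAX_UTILITY)                 # bounded above
    return round(total / RESOLUTION) * RESOLUTION   # finite resolution

# "carol" was born after construction, so she counts for nothing here.
print(utility({"alice": 3.0, "bob": 4.0, "carol": 50.0}, time_step=10))
```

The function is total, bounded, and takes finitely many distinct
values over any bounded input range, so maximizing it raises none of
the infinite-case contradictions mentioned above; whether it is a
*good* set of values is exactly the question at issue.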

So far as I can tell, it makes perfect sense to hack the utility
function that way, but you're saying it doesn't. Why doesn't it?

I suppose I could try coming up with some more fanciful figurative
interpretation of what you said, but I sense diminishing returns. If
you want me to think you're saying something plausible, you'll have to
try harder.