How about "the set of choices made by an SI"? That's what
Singularitarianism is all about: making more intelligent decisions
through a proxy. The idea is that either they'll all make the same
choice, or that at least *some* choices will be excluded as being
objectively wrong.

Sigh... the trouble with discussing this subject is the verbal
contortions I have to go through to maintain correspondence between the
actual logic and our cognitive intuitions. (That's casting aspersions
on our cognition, not the theory. We're nuts; the logic is perfectly straightforward.)

What I meant is "distinctions useful for making the choice". That is,
distinctions which render some choices objectively better than others.
If you claim that this only applies within a pre-existing system, then I
just ask whether there are distinctions useful for choosing between
systems. Is there a sense in which blowing up a K-Mart is actually
_wrong_, not just a matter of taste? I don't know. Not knowing, it
seems to me that the results of assuming an objective answer take
priority over any results derived from the assumption that it's all a
matter of taste.

I'm not interested in subjective distinctions. "Subjective" means
"evolution is pulling my strings." "Subjective" means "I don't want to
justify my assumptions." "Subjective morality" is as silly as
"subjective economics" or "subjective physics". I want rational
justifications. If you assume anything, I want a rational justification
for that. I want a chain of logic leading right back to a blank slate.
And now that I have it, I'm not settling for anything less.

What is the concrete difference between Externalism and utilitarianism?
In utilitarianism, the morality of happiness is assumed. In
Externalism, it has to be rationally justified. Why, exactly, does this
make Externalism *less* rational? Occam's Razor!

I think we're getting lost in the Great Maze of Words. Let's look at
what you need to do to program "objective morality" vs. "utilitarianism"
into an AI. In the second case, you need an initial, unjustified goal
labeled "happiness" with positive value. Using Externalism, you can
start from a blank slate - all goals at zero value.
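The difference is concrete enough to sketch. Here's a toy illustration in Python - the goal labels, the dict-of-values representation, and the "justify" step are all my own illustrative assumptions, not an actual AI design:

```python
# Toy sketch: a goal system as a map from goal labels to values.
# The label "happiness" and the numbers are assumptions for clarity.

def utilitarian_goals():
    # Utilitarianism: one unjustified axiom baked in at startup.
    return {"happiness": +1.0}

def externalist_goals():
    # Externalism: start from a blank slate -- no goal carries any
    # value until something assigns it one.
    return {}

def justify(goals, goal, value, reason):
    # In the Externalist system, a goal only acquires nonzero value
    # via an explicit justification step. ("reason" is a placeholder
    # for whatever chain of logic does the justifying.)
    goals[goal] = value
    return goals
```

The point of the sketch: the utilitarian system can't run without its axiom, while the Externalist one boots empty and every value must arrive through `justify`.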

An immutable morality is no more religious than an immutable reality.
When science impinges on territory formerly held by religion, whether it
be the nature of reality or cosmology or even making choices, it makes
the issues scientific ones; it doesn't turn the science into mysticism.
To borrow from Greg Egan.

I beg your pardon? Externalism is as rational, functional, and
utilitarian as you can get and still be a human being. Externalism is
what you get when you take utilitarianism and prune the unnecessary
assumptions. We're talking about a philosophy designed for AIs!