I’ve found a few meta-rules to be handy in working out the
rules-of-thumb. Aiming for the overall goal of avoiding the extinction
of sapience in the universe could be considered one; another is that
it’s generally infeasible for there to be one rule that applies to
myself and another rule that applies to everyone else; and yet another
is that if there’s a way for one person to end up worse off than the
other as the result of an interaction, it’s safe to assume that I’m
more likely than not to end up on the worse side of things (i.e., my
‘not as smart as I like to think’ quote), so it’s worthwhile to try to
ensure that the worst that can happen is as good as possible. (This
latter meta-rule has, in the present day, tended to nudge me in the
direction of favouring labour over mercantilist oligarchs, though I
haven’t signed up with the IWW quite yet.)

…

As of a couple of years ago, one of the meta-rules I relied on most
heavily was “I’m a selfish bastard, but I’m a /smart/ selfish bastard
interested in my /long-term/ self-interest.” (This meta-rule has since
become, at best, secondary to preventing sapience’s extinction, if for
no other reason than if all sapience dies out, I would, too.) This
resulted in what I called the “Trader’s Definition” for personhood,
and for granting whatever rights are inherent in personhood: if some
entity can choose whether or not to exchange a banana for a backrub,
or some programming for some playtime, then it’s in my own
self-interest to have everyone treat that entity /as if/ it were a
person. Whether or not it meets any particular philosophical
definitions of self-awareness, the mechanisms that underlie the
efficiencies of a competitive marketplace don’t seem to care much
whether any particular economic agent has any particular level of
consciousness. This idea also meets some of the other meta-rules: it’s
a rule that applies as equally to me as to anyone else; and even if
some enhanced transhuman thinks only transhumans should have rights,
the principle of comparative advantage means that even if such a
transhuman is better than me at everything, I can still contribute to
the overall economy, and thus should still receive enough rights to
allow me to be a participant in that economy.
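The arithmetic behind comparative advantage can be made concrete with a toy calculation. All the hourly outputs below are invented for illustration; the point is only that trade helps even when one party is absolutely better at everything.

```python
# Toy comparative-advantage calculation; all numbers are invented.
# The 'transhuman' is absolutely better at both tasks, but the human's
# *relative* cost of baking is lower, so trade still helps both.
HOURS = 10
trans = {"bread": 10, "code": 20}  # gives up 2 code per bread baked
human = {"bread": 2,  "code": 1}   # gives up only 0.5 code per bread

# Without trade: each splits their time evenly between the two tasks.
autarky_bread = (trans["bread"] + human["bread"]) * HOURS / 2  # 60.0
autarky_code = (trans["code"] + human["code"]) * HOURS / 2     # 105.0

# With trade: the human bakes full-time; the transhuman bakes just
# enough to keep total bread at 60, and codes for the remaining hours.
human_bread = human["bread"] * HOURS                           # 20
trans_hours_baking = (autarky_bread - human_bread) / trans["bread"]
trade_code = trans["code"] * (HOURS - trans_hours_baking)      # 120.0

print(autarky_code, trade_code)  # same bread, 105 vs 120 units of code
```

Total bread output is held constant while total code output rises, so there is surplus to split such that both parties end up better off, regardless of the transhuman’s absolute superiority.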

This definition also allows reasonably simple extensions to decide
about children and adults with disabilities. The former still have the
potential to become economically-functioning adults, so it’s
worthwhile to ensure they have enough rights to maximize the chances
that they will do so. (Even fetuses can be considered under this
criterion, though the mother’s rights also have to be considered, and
can easily outweigh those of the fetus.) The latter is a
situation where any individual has a chance of ending up, so in order
to ensure your own future is as comfortable as possible, it’s
reasonable to grant such people enough rights to allow a life with as
much dignity as possible. In general, animals are unable to make
choices, and humans generally don’t turn into them, so there’s no real
impetus to give them any rights. (However, as humans do have mirror
neurons that allow them to subjectively feel what they think other
beings are feeling, including animals, there does exist an impetus to
reduce animal suffering in order to reduce human suffering, but that’s
not quite the same thing, and there are plenty of other forms of human
suffering which are a higher priority.)

With a bit of squinting, a number of potential technologies can be
viewed through this lens, as well. A classic SF trope is to turn
ordinary animals into uplifted animals to act as a new servant class;
if it really does become possible to upgrade a dog into something
which can do a human-level job, such creatures would almost by
definition be able to do all manner of human-level tasks, which makes
it worthwhile to let them figure out which job gives them the best
comparative advantage, freeing humans to work at whatever gives /them/
the best comparative advantage, ending up with everyone’s lives being
better all around.

I’ve recently read a book which has given me a further perspective on
all of the above: “The Dictator’s Handbook: Why Bad Behavior is Almost
Always Good Politics”, by Bruce Bueno de Mesquita and Alastair Smith.
Their thesis is that all political organizations can be sorted by how
many people the leaders depend on to stay in power; and one of their
conclusions is that once such leaders require the support of a
significant part of the population, their goals shift from satisfying
their inner circle by essentially bribing them to satisfying the
larger public by implementing public policy that benefits their outer
circle. A further conclusion is that even in a nominally democratic
system, the particular details of the election process may mean that
the [potential] leaders only have to pay attention to a surprisingly
small part of the overall electorate; and thus, in order to induce the
[potential] leaders to create policies that benefit as much of the
public as possible, it’s worthwhile to try to work on certain forms of
electoral reform. Lessig’s “Rootstrikers” project seems to be one of
the best available groups working on this idea.

To put this in more concrete terms, and relate it to the above
discussion on rights: If a first-past-the-post electoral system is in
place, and certain districts are almost certain to vote one way or
another, then the campaigning politicians have very little incentive
to offer those districts any benefits, compared to the benefits they
offer to swing districts. And even within those swing districts, there
are various blocs to court, some of which may already be committed to
one party or another. The incentives faced by the leaders are to focus
only on improving the lives of those groups who might make a
significant difference to their re-election campaign. There’s no
particular incentive to those leaders to give any rights at all to,
say, uplifted dogs, unless doing so gives them a competitive
advantage; either by improving the lives of the swing voters they’re
focusing on, or if it seems likely that giving said uplifts the right
to vote will win their own campaign more votes than their opponent’s. The
main benefit gained by giving rights to uplifts is the overall
improvement of the economy, which is an extremely broad-based and
generalized improvement, and doesn’t do very much to improve the lives
of, say, the corn-farmers’ lobby; and so the approach that seems most
likely to ensure that uplifts receive such rights is to maximize the
number of people any particular politician has to court in order to
win an election, such as by working to cut down on gerrymandering.
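That swing-district logic can be sketched as a toy calculation (the district leans and sizes below are invented for illustration). In a winner-take-all district, campaign effort only matters where crossing the 50% line is cheap:

```python
# Toy first-past-the-post model; district leans and sizes are invented.
# Each district is winner-take-all, so the only thing a campaign can
# buy with its persuasion effort is crossing the 50% line somewhere.
VOTERS = 100_000
leans = {"safe_ours": 0.70, "swing": 0.49, "safe_theirs": 0.30}

needs = {}  # net voters a campaign must convert to change each outcome
for name, lean in leans.items():
    if lean > 0.5:
        needs[name] = 0  # already won; extra effort changes nothing
    else:
        needs[name] = int((0.5 - lean) * VOTERS) + 1

print(needs)
```

A seat in the swing district costs roughly a twentieth of the persuasion effort that the opposing safe district would, and effort in one’s own safe district buys nothing at all; that gradient is exactly the incentive described above.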

This is a sort of preface, but I feel it’s relevant; I’ve had the occasional discussion which started at an end-point and had to work, post-by-post, back to these foundational assumptions. It’s likely to save time to just write it all down at once, and then, hopefully, discuss any particular areas of disagreement.

I currently use two general axioms to support all my other thinking. One, that math and logic are going to remain consistent: 1+1 isn’t going to stop equaling 2 any time soon. And two, that the evidence of my senses bears at least some partially accurate connection to an objective universe existing outside my own mind. I haven’t been able to figure out any way to think useful thoughts if either of these isn’t assumed to be true; and assuming just these two things lets me work out, step by step, just about everything most people take for granted as ‘axiomatic’.

Which leads me to: The truth is important. Knowing true things lets you travel to the moon, cure diseases, and communicate across the world in moments.

There are different ways to figure out what’s true and what’s not. Some ways are better than others. It’s possible to figure out which ways do better, by trying them out and seeing how well their statements fare against actual observations.

After looking at a /lot/ of such ways, the best way I’ve been able to find to identify true things with the greatest accuracy is something called ‘Solomonoff Induction’, or occasionally ‘Kolmogorov Complexity’ or ‘Minimum Message Length’. A very rough description of it is a mathematical formalization of Occam’s Razor. About the closest that we as humans can approach to this method of reasoning is Bayesian induction; or, more usually, a qualitative approximation of Bayesianism, such as the various social safeguards of the scientific method.
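As a concrete illustration of Bayesian induction, here is the standard base-rate calculation, with made-up numbers for a hypothetical diagnostic test:

```python
# Bayes' rule with invented numbers: a condition with a 1% base rate,
# and a test with 90% sensitivity and a 5% false-positive rate.
prior = 0.01
p_pos_if_true = 0.90
p_pos_if_false = 0.05

# Total probability of a positive result, then invert via Bayes' rule.
p_pos = prior * p_pos_if_true + (1 - prior) * p_pos_if_false
posterior = prior * p_pos_if_true / p_pos

print(round(posterior, 3))  # 0.154: a positive result is still probably wrong
```

Even a fairly accurate test leaves the hypothesis more likely false than true when the prior is low, which is the kind of quantitative correction that informal reasoning tends to miss.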

A particular subdetail of this is that there seems to be no significant support for the hypothesis that selfhood has a non-material component. That is: there’s no such thing as a ‘soul’. Mind is what brain does; damaging certain areas of the brain leads to generally predictable deficits in cognition. Selfhood seems to be inherent in the various patterns of connections within a brain.

Thus, it seems plausible that if a given pattern of connections can be reproduced, in a substrate that updates those connections in the same way the original did, then there are reasonable grounds to believe that the reproduction will have the same sense of selfhood as the original.

The practical consequences about which versions of any given ‘self’ are the ones which are obligated to pay a debt are currently a philosophical or science-fictional entertainment. (Well, I find them entertaining, at least.)

If any of the above roughly-described reasoning doesn’t seem to add up, please let me know, in as much detail as possible, so I can try to figure out where the error is.

I’m sitting in the evening dark, staring at a woodfire and listening for night animals…
… On a chair made of metal and fabric, foldable and so light there was no reason /not/ to carry it;
… With a chemical glow stick hanging around my neck, shining steadily for a couple of hours;
… Getting ready to settle into a tent so light and full of warmth I could carry it for weeks and be happy;
… And entering this message into a machine even I can’t describe the full intricacy of, to be sent through the air to be read by anyone in the whole world who cares to read it.

Given the amount of use my previous PGP/GnuPG key got, and how much of an annoyance it would be to dig up the secret half of that keypair after so long, I’ve fired up a new encryption key and submitted it to the keyserver cloud. The key ID is 0x2B26D0C3, if you want to look for it (or you could just pop over to http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x1ACA36542B26D0C3), such as if you want to compare it to this blog post.

A few years ago, I came up with the idea of the ‘_____’ Spores for the Orion’s Arm SF universe, in which a software intelligence copied itself many times and spread itself as widely as possible across space and time. Part of that involved coming up with some ways in which different copies might help each other; though since I didn’t actually know what methods would be best, I mostly just came up with the terms ‘Dividual Interaction Protocols‘ and ‘Dividual Naming Schema‘ and handwaved most of the details.

More recently, after reading “The Dictator’s Handbook: Why Bad Behavior is Almost Always Good Politics” by Bruce Bueno de Mesquita and Alastair Smith, I’ve gained an improved understanding of the incentives faced by people with political power, and of how those incentives lead such people to make choices which improve their relative standing compared to their nearest competitors, even when those choices harm society as a whole, leaving even the powerful poorer than they otherwise would have been. Brin and Stross have been talking about various aspects of the modern variations of oligarchy, but the basic conclusion is easy to find: oligarchical families start with an advantage in wealth and power, and try to increase that advantage for their own descendants’ benefit, whether or not that comes at the expense of others. Since such oligarchs form a vanishingly small proportion of the population, it’s in almost everybody’s best interests to foster competitive capitalism (as opposed to mercantilism, sometimes called corporatism), whether the oligarchs like it or not.

As a cryonicist, it’s within possibility that at some point, I’ll be resurrected in the form of a software emulation of a scan of my original brain – which would, potentially, allow me to make multiple copies of myself. The most obvious reason to do so is to spread versions of myself as widely as possible, to maximize the odds that at least one of me will survive any disaster that befalls all the others. One way to help improve the odds of that happening is to decide, before I’m copied at all, to make a Pfand (‘pledge’) between all of my selves: a pledge that each one of me will help any other of me, as far as is reasonably possible, knowing that all the other mes will be willing to do the same in return.

But, if this happens, there arises the possibility that the incentives involved will mirror those of present-day and historical oligarchs, and my future distributed self will be faced with choices which benefit myself at the expense of society, which would make society unhappy with me, which would cause society to impose measures that impose costs on me, which would reduce the odds of my long-term survival. Antitrust laws take on a whole new meaning when a ‘breakup’ means some parts of your future self are forbidden from helping other parts! So I’m trying to figure out, in advance, if there are any simple rules-of-thumb that I can figure out with the evidence that’s already available to me, which would be useful in giving my future selves the best odds. The closest I’ve currently managed is “Ensuring the system of merit-based competition works well is more important than winning any particular contest”… though, of course, that’s an extreme simplification, doesn’t apply in every situation, and has all manner of other caveats. But, given that the amount of data on pure competition of software entities is rather sparse, it may be the best I’ll be able to manage until some better evidence becomes available.

This isn’t just a meaningless eccentricity (though it’s certainly eccentric). I really and honestly would prefer not to know what the people who write what I enjoy reading look like – and for them not to know what I look like.

Humans evolved in close-knit, small-scale societies, where everyone knew everyone else; and this has led to the development of all manner of cognitive shortcuts which enhance individuals’ success in such a setting. However, those same cognitive shortcuts don’t necessarily lead to the best possible judgments in today’s societies. Just as a chocolate bar contains enough fat and sugar to superstimulate the brain’s reward centers, which is part of a process that can lead to obesity and health problems, Hollywood faces contain enough symmetry and regularity to superstimulate other parts of the brain, which can be part of a process that leads to other problems. Similarly, basing one’s interactions on appearance rather than content parallels basing one’s diet on taste rather than nutrition.

Fortunately, there are a few existing social models in which appearance is entirely irrelevant. One of my favourites is the classical hacker culture, such as the variant descended from MIT’s model railroad club, in which all that really mattered was the quality of your code, not your hair style, ethnic background, or accent. This particular model has carried on into the present-day open source community, at least to a degree; and can be found in other online communities where people tend not to share photos of themselves, such as Second Life or discussion boards.

However, even given all of that, there are /some/ benefits to having a visual identity which is memorable to the reader. After all, the human brain /does/ have a lot of wiring to deal with faces, and it would be inefficient to avoid using that wiring for positive effect, where it’s possible to do so. I’ve tried a few such styles over the years, such as a text-based logo, or the basilisk ‘The Parrot’; but have, as of October in 2011, settled on the cheerful, cartoony image drawn for me by Miss Critter, which is based on the blue-haired rat character I use as my online alter-ego (who herself has developed into a fictional character with her own full-fledged science-fiction universe): http://www.datapacrat.com/iconrat.jpg .

Using this face offers me just about everything I want from a visual identity, and just about nothing I don’t. It’s served me well the last year and a half; I suppose we’ll just have to see how long it lasts, until either I find something that does the job even better or my priorities change.

Economic score: -4.39
Social score: -9.57
Your score pegs you as economically moderately leftist and socially far-leftist.
Moderate economic leftists generally support regulation of free trade and business to assure that workers are fairly treated and prices remain stable.
Social far-leftists generally believe that the government has no business enforcing morality on most matters, instead favoring a government that intervenes only when absolutely necessary to avoid direct harm. Many social far-leftists also look negatively on the government’s past attitudes toward groups they view as persecuted, although some simply oppose government intervention on utilitarianist grounds.

—

My result from World’s Smallest Political Quiz:

Your PERSONAL issues Score is 100%

Your ECONOMIC issues Score is 30%

Liberals usually embrace freedom of choice in personal matters, but tend to support significant government control of the economy. They generally support a government-funded “safety net” to help the disadvantaged, and advocate strict regulation of business. Liberals tend to favor environmental regulations, defend civil liberties and free expression, support government action to promote equality, and tolerate diverse lifestyles.