
Tuesday, November 30, 2010

Hurray for updates. Busy this week, so don't expect an evaluation on Monday (standard "unless I get really productive on Monday itself" disclaimer applies).

So, Jack is now officially Jack, by which I mean I have decided that yes, that'll be his name, or at least what people call him (I'm not sure what his birth name is, which you'll notice is something I tend to omit). Though he has another weird nickname in the community; I couldn't resist. Anna has been mentioned by name as well, and by now her place in what's to come is clear, even if "what's to come" is not. From the writing, at least. I actually do have a pretty decent idea of what the big event is in my head, but it takes a bit more background to show it. Probably in the next scene.

Wrote and rewrote a part, which was sounding quite awful. It's somewhat better now, but I'm still not happy with it (I never am, with dialogue). Eventually I'll get the hang of it. Eventually. Maybe.

What's coming up is a pretty big part I'm not sure how to write. It's kind of something the plot hinges on, so I don't want to simply handwave it away, but it's tricky to find a decent explanation. Still, I know I'll be much happier with myself and with the story if I can make it work, and it shouldn't be that hard. I'm feeling confident.

So... um, characterization, I suppose, is coming along. Maybe. Somewhat. Not really. Jack's, possibly. Anna has just appeared, and her character has undergone so much rewriting that I'm still not sure what will show up in Golden Sky and what died with Under the Surface. So I have to work on that. This next part, again, is pretty much key for that.

As for the rest of the plot, well, it's not entirely clear right now, but I know the rough shape of it. Like I keep observing, writing is so much easier when I have an ending to work towards instead of a beginning to work from. And Golden Sky's final scene just so happens to be its introduction, so I know the ending quite concretely. All a matter of getting there now, and I can pull that off. My life is gonna get rescheduled quite a bit starting next Monday, so writing time is unpredictable, but if it all goes well I might be close to finishing this. But I'm not getting my hopes up just yet. As always, we'll see.

Saturday, November 27, 2010

And I'm sure at this point you're thinking of all the typical objections, like seeing loved ones die and bogged-down memory and boredom and whatnot. But what if I said this immortality extended to your family and friends, hell, the entire human species and any other sentient beings we might find in the process? And we'll improve everyone's bodies and brains, so memory won't be a problem either.

And what about overpopulation? Well, that would take billions of years or longer, but I suppose eventually the universe might fill up. And what about the heat death of the universe? No physical system would be able to work, and you probably know I'm a naturalist, so any immortality would have to be physical. But let's say we find a way around those with new discoveries allowing us to create new universes or whatnot.

But maybe there are other problems left. Or maybe just that doesn't work. So now what?

Well, it's a simple principle that I've been thinking about lately. Certainly there are many problems that we avoid due to our limited lifespans. You've heard people say, often enough, that they don't care about global climate change since they'll be dead, that the sun becoming a red giant in a few billion years doesn't concern us because we won't live long enough, hell, people saying they'd rather die young than get old and sick. This simple principle says: When you avoid a problem because you won't live long enough to face it, you don't have a solution, you have another problem.

I know not everyone is as passionate as me about living as long as possible. I was recently surprised at how many people told me they wouldn't want to be brought back to life were such a thing possible (the science of internet polls). But this idea goes beyond that. I admit eternal life has issues, but those are the issues of life itself, we just don't have enough time to face them now. Like a baby hoping she'll die at age five and never have to face school.

When someone wants to kill themselves, we usually think something must be wrong, and we would want to fix that if possible. Those of us who believe in the right to euthanasia still think it'd be a better outcome if we could cure the disease that's causing the suffering. So why don't we extend that thinking pattern indefinitely? Why do people talk about some "natural extent" of human life, after which it'd be silly to still want to live? Why not focus on the problem of life not being worth living at a certain point?

My argument is: if you don't exist, you can't achieve goals, you can't be happy, you can't experience pleasure, you can't rack up utility points. Of every way humans evaluate outcomes that I know of, in none does death ever come out as the most desirable outcome conceivable. I accept the existence of fates worse than death, but not their theoretical inevitability. That is, there's always a conceivable something better than death. If life is looking worse than death, then either your perceptions are wrong or life is not living up to its theoretical potential. In either case, we have a problem.

The problem might not be solvable. Maybe immortality does bore you eventually, regardless of what you do. But the problem is still there, and having it is still preferable to dying. Therefore, you have a problem to solve, if you want to achieve the best possible outcome, whatever that is for you. If you're going to die, you have two problems to solve: one is your death, the other is whatever sucks in life. If there's something else down the line, then you have three problems. Or four, or five, or six. And every new possible way to die adds another problem. In a way, you have infinite problems, sorry to break it to you. But I'm telling you because I think it's better if you know.

If you truly, really, honestly think that the best possible thing that could happen to you is for you to die after a certain point, then there's nothing I can say. But I don't believe any human has a utility function that prefers death to all possible states of not-death. Or however it is you think preferred outcomes are determined, if you don't like utilitarianism. So, next time you think of the future, think about whether you want to be there. If not, find out why. Knowing what the problems are is a good way to start solving them.

As an aside: I don't believe eternal life is possible. Like I said above, I think naturalism is correct: life cannot exist other than as a physical system. And even if the heat death of the universe can be bypassed, somehow, you'd have an eternity of time for that life to end. If there's any possible way for death to happen, it will happen, probability 1, given an infinite amount of time. I might be wrong. I want to be wrong about this, provided we can solve the other problems. But it doesn't seem likely. That doesn't negate any of the points I raised before; a problem you cannot solve is still a problem.
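The "probability 1" claim is the usual limiting argument. As a sketch, under the (added, not stated above) assumptions that each period of time carries an independent chance of death bounded below by some fixed p > 0:

```latex
% Probability of surviving n periods, given independent per-period
% death probability of at least p > 0:
P(\text{survive } n \text{ periods}) \le (1 - p)^n
% which vanishes in the limit:
\lim_{n \to \infty} (1 - p)^n = 0
% Hence, over infinite time, death occurs with probability 1.
```

If the per-period death probability instead shrinks fast enough that the probabilities sum to a finite value, eternal survival can have positive probability, which is why the independence and lower-bound assumptions do real work here.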

Monday, November 22, 2010

Welp, this is coming along. Somewhat. It might go faster once I get more free time, which should be somewhere around two weeks from now. Regardless.

Right now I'm working on our nameless narrator's backstory; I'll provisionally refer to him as Jack. I'm not sure that'll be his name, I'm not even sure he'll be a he, but Jack works for now. Anyway, it's a backstory I've worked with in many variations over the years, so I'm happy to at last write it down for someone. Though it means I won't be able to use it anymore. Or at least not as much.

I'm still unsure about one key aspect of the plot, but I can't find any alternatives; I don't think they're possible. Sure, a few variations here and there, but the core is still the same. So I'll just have to give it my best shot and hope it doesn't suck. Other than that, I'm working on how Jack meets Anna, which is in a sense the beginning of the story proper, as opposed to Jack's backstory (Anna's backstory being mostly Graduality, and possibly referred to in passing in the meeting).

I have to flesh out the Illuminated interactions a bit more, since they're at the core of the plot (the Illuminated are people like Jack and Anna; you'll find out about them eventually). Specifically, why it's all so chaotic, which is a new development but does help explain the conflict between Jack and Anna, on two different levels. Right now it's kind of handwaved away; I'll have to work out a real reason at some point, which will make the story more satisfying. Probably.

So that's all I have for now. See you in a week, if I haven't been swallowed by studying.

Monday, November 15, 2010

You might be looking at the title and wondering what the hell is up. Then again, maybe not, because I doubt I have "regular" readers. In any case, before last week's hiatus, I was working on "Under the Surface", the sequel to Graduality. Well, I got an idea, so that changed. Two ideas, actually: the first being the basic premise of Golden Sky, and the second that it could easily be adapted into a sequel to Graduality, scrapping the much worse thing I was working on. It might eventually be revived, but as of now it's dead. Anna is no longer the protagonist of this work, or at least not the sole protagonist. You'll see when it's done.

Right now the intro is written, and soon I have to start the big flashback to how it all starts. The intro is set right before the aforementioned Golden Sky; the plot deals with how that happened. It might be split into two works before reaching the Golden Sky point, depending, and then it might continue.

I'm a bit iffy about writing an older Anna, mostly because I had timed Graduality so she'd be a teenager by the time of the sequel set in 2010. I don't like writing a plot set in the near future, that's the kind of thing that ages badly 10 years down the line when it turns out the Internet was replaced with chewin' on skunks and cars are considered a crime against humanity for their pollutants. But I really wrote myself into this corner, and I'm not in the mood to do historical revisionism on Graduality, or try to adapt the plot to work with a 15-year-old, so I hope you skunk chewers can forgive my eventual anachronisms.

I'm happy about this new development; hopefully I won't need to restart this time. It hits far closer to where I aimed originally with Anna as a character, plus it gets the "fun to write" points from the replotting, which adds up to me being far more excited about this than either of the last two versions. I'm still not sure what to do with that old plot, maybe let it die, but it could be saved by another character if I figure out what I want to do with it. Maybe. In any case, we'll see.

Wednesday, November 10, 2010

Did you know that the word Sigmalephian has no Google hits as of this writing? You might wonder why I would bring that up (or just assume I'm crazy and/or an idiot, hypotheses I cannot discard). Well, one of my usual Internet aliases is Sigmaleph, which I'm quite fond of. And "Sigmalephian" seems to be a good word to describe something relating to my person, much better than, say, "Currentian" or "Mirrassian". Or, gods forbid, my true name, which I must keep hidden lest I grant you mystical powers over my person. In an act of convenient labelling and tautology, I have decided to declare that I belong to the Sigmalephian school of philosophy. That is, that whatever my thoughts on any subject, it just so happens that they match the thoughts of this Sigmaleph character, who, as luck would have it, is myself. Does that make sense? It shouldn't.

All of the above is actually irrelevant to the matters originally prompting me to write this post; I just felt I needed to get that out there (here) at some point, and this felt like a good opportunity. The following is indeed Sigmalephian philosophy, but then that's true of quite a lot on this blog, and remarking upon that fact has never been necessary or useful for the reading of my mental excretions.

You're still here? Huh. 20 SigPoints for persistence. Since SigPoints cannot be exchanged for anything as of now and for the foreseeable future, your true reward is my rambling. Aren't you excited? Well, so it goes.

One thought that has repeatedly happened upon me is that the basic benefit of good is cooperation and the basic benefit of evil is resourcefulness. Which is to say: on the purely pragmatic side, ignoring for now self-image and warm fuzzy feelings, "good" agents have as an advantage the fact that we live in a world with other good agents, and they are more willing to cooperate with others like themselves. The basic weakness of the murderer is that zie doesn't go against the detective, zie goes against the detective backed by the police department supported by a large part of society. And the advantage "evil" agents have is that they have more methods available to them. If there are two different ways to solve a problem and one involves kicking puppies, the evil agent can choose based on their relative usefulness, whereas the good agent has the disadvantage of having to also factor in the ethics of puppy-kicking. This doesn't cut both ways, since the evil agent has no particular reason to prefer evil methods to non-evil ones that work better. A decision algorithm that only maximises strategic merit will, on average, outperform one that has to balance strategy and ethics.

Where am I going with this? Well, you might notice that the "evil" advantage is intrinsic to evil agents, whereas the good advantage applies only when there's a perception of goodness. That is, any agent who cares less about ethics than the adversary has the advantage of more options, but good agents that don't reap the benefits of the goodness advantage can exist. What you need is for other good agents to think you're good and help you, which can happen independently of actual goodness. Which brings us to the problem: an evil agent can reap both benefits if it is evil but perceived as good. The reverse does not happen; indeed, it kinda sucks to be good and perceived as evil, because you get none of the benefits.

As a brief parenthesis: yes, this is a simplified model, and I'm not addressing what "good" and "evil" are, which is a pretty deep problem. For the purposes of this model, "good" and "evil" are what the society in context thinks they are. This is not synonymous with actual good and evil (as I understand them), but it's usually close enough in most cases. The whole "murder is usually considered bad among humans" thing. Another simplification is that the model ignores the self-image, conscience and intimidation factors, and possibly others, which are not minor but usually don't tip the scale far enough. Bottom line, I think the model works for most cases. I welcome any improvements that keep it simple. But first, read the rest, because there's one major flaw I correct later on.

Onwards. So, imagine an evil agent who thinks zirself very smart. So smart, in fact, that zie considers zirself able to trick most good agents into cooperation while still using evil tactics. And thus, the incentive for goodness is gone. Problematic if you want people to not be evil, which you do, being a good agent (and if you weren't, you wouldn't tell me, now would you?). Note that even if the evil agent considers zirself to be good, zie can believe most people are mistaken, and thus still want to trick people, because the advantage is in being perceived to match society's idea of good. It's close enough to true that nobody sees themselves as evil, but people can certainly see themselves not matching the general idea of good, or think that everyone is making such a fuss about that minor thing of killing [insert group here], who aren't really people. Or whatever. Addendum noted (no, this is not the major flaw I hinted at), moving on.

Well, at this point I started to consider solutions to the problem. One noticeable thing is that it shows the appeal of an impossible-to-trick good agent handing out significant punishments and rewards. Impossible to trick, so there cannot be a false perception of good; good, to make sure it only cooperates with good agents; and the rewards and punishments have to be huge to outweigh any possible advantage of evil. The idea of the omniscient, omnipotent, benevolent god, in other words. Not a stunning discovery, of course, but it put the ball in a more familiar court. Since I'm fairly used to considering why gods are not good answers to questions, that part of my brain engaged quickly, and I noticed my big oversight.

A general principle to consider: in most cases, if believing X is beneficial and X is false, there should exist a true belief Y that delivers the same benefits. Y should also explain why X is beneficial, but that's tangential to the point. In the universe we live in, the power of knowledge is the ability to make better decisions. When you're deciding based on incomplete knowledge (i.e. the situation every human being is in whenever making a choice), the decision based on knowledge closer to the truth should, on average, outperform the others. There are beliefs that have effects not related to knowledge, like the placebo effect and such, but they are not the predominant case. Which adds up to: you should want to be right. When you find yourself in a situation where you want people to be consistently wrong in order to make better decisions, there's probably something wrong with your "right".

What I was wrong about, rather obvious in retrospect, is that good agents cooperating better is not purely a matter of being more willing to do so given the perception of goodness. Good agents cooperate better, in part, because of the characteristics of goodness itself. That's how goodness came to be in the first place: if there was no advantage to it, it wouldn't have been selected for, and the primitive good agents would've lost the evolutionary game to those without goodness. And, separately but more importantly, it's the deeper why behind good agents wanting good agents. The more good agents a society has, the better it will do, outweighing the advantages of the increased resourcefulness that comes with evil. Otherwise, it'd be irrational to want a good society, and I'm trying to show the opposite.

In the end, it all adds up to this: while it might seem that a pragmatist could simply fake goodness and reap the benefits of both evil and good, in the long term that's a poor group strategy. People should want to cooperate, not because of fear of a false punishment, and not just because that's who they are (though that plays a significant part in what good is), but because it works at the group level.

Which brings us to the second part of the problem: the individual perspective. You might notice that while it's better for society for all its agents to be good, for each individual agent it still seems preferable to be evil and perceived as good, getting the benefits without the drawbacks. Of course, this individual perspective results in society collapsing. It's the Prisoner's Dilemma: everyone watches out only for themselves, and it adds up to the worst global situation. Which, once again, rings that little bell in my head that says that if your "smart" strategy has consistently worse results than the "stupid" strategy, then it can't be that terribly smart.
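For concreteness, the Prisoner's Dilemma structure above can be sketched in a few lines. The payoff numbers are the standard textbook ones, chosen purely for illustration, not anything from this post:

```python
# Minimal Prisoner's Dilemma sketch: defecting ("evil") dominates for
# each individual player, yet mutual cooperation beats mutual
# defection for the group. "C" = cooperate, "D" = defect.

# Payoffs (my points, their points), indexed by (my move, their move).
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate: good for everyone
    ("C", "D"): (0, 5),  # I cooperate, they defect: I get exploited
    ("D", "C"): (5, 0),  # I defect, they cooperate: I exploit them
    ("D", "D"): (1, 1),  # both defect: the worst group outcome
}

def my_payoff(my_move: str, their_move: str) -> int:
    """Return my points for a given pair of moves."""
    return PAYOFFS[(my_move, their_move)][0]

# Whatever the other player does, defecting earns me strictly more...
assert my_payoff("D", "C") > my_payoff("C", "C")
assert my_payoff("D", "D") > my_payoff("C", "D")

# ...yet the group total under mutual defection is the worst of all.
totals = {moves: sum(points) for moves, points in PAYOFFS.items()}
assert totals[("D", "D")] == min(totals.values())
print(totals[("C", "C")], totals[("D", "D")])  # prints: 6 2
```

The "smart" individually-rational strategy (always defect) lands everyone on the worst total, which is exactly the little bell ringing in the paragraph above.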

One answer is that a truly smart society should be hard to trick. Not omniscient, that's beyond human means, but it seems a necessary application of intelligence is detecting concealed evil and thus acting as a deterrent. That's, like I said, one answer, but I don't think it's the best one. Creating agents that want to be good is more efficient if it works, but also more difficult. I'd be wary of genetically modifying humans, for example; while theoretically it could be very useful, there are many ways it could go wrong. But while it still seems that better answers should exist, the thing I'm happy about is that at least I managed to get to an answer that shows a smarter society works better, not worse.

Tuesday, November 2, 2010

Bah, I hate this part of the writing process, when I'm struggling to figure out what comes next, so I'm not inspired enough to write what I do know, and it takes weeks to finish scenes. Some day I might find a solution for that.

So right now I'm just trying to mesh the old plot with the new plot, which is not that difficult (there wasn't much "old plot" to begin with) but does require a bit of reconsidering what I already wrote. I think I solved the specific problem I had in mind, but, as mentioned above, I'm not terribly motivated right now, so it keeps getting delayed. I've written one paragraph since last Monday, and another might have to be deleted, so it's a net gain of zero (or thereabouts) in terms of words written, and one detail in terms of plot-in-my-head. Bad. I should be working harder, and I will... but not this week.

Something else is taking priority until next Monday, but from then on I should have more time to get stuff done. Or not. But probably yes. So, anyway, don't expect an evaluation next Monday unless I get really creative on Monday itself, which might happen but I'm not counting on it. As always, we shall see, won't we?