Servitors, Minds and Maxwell’s Equations

by Mike Sententia on December 8, 2011

The difference between a real explanation and a curiosity stopper.

An enormous bolt of electricity comes out of the sky, and the Norse tribesfolk say, “Maybe a really powerful agent was angry and threw a lightning bolt.” The complexity of anger, and indeed the complexity of intelligence, was glossed over by the humans who hypothesized Thor the thunder-agent. To a human, Maxwell’s Equations feel much more complicated than Thor.

The human mind has special modules for simulating other minds. We needed them to understand tribal politics — keeping track of friends and enemies, knowing who to trust, etc. That module lets us unconsciously simulate anything as a mind with its own desires and goals, whether it’s Thor, water “wanting” to flow downhill, or the Coca-Cola corporation “deciding” what to sell.

When we hear an explanation involving an intentional agent (that is, someone or something that acts with an intent), we use that mind-simulating module. It’s unconscious, so we don’t realize how complex “Thor” is. In general, explanations that invoke intentional agents feel simple, and feel like a very likely explanation, even when they’re incredibly complex and incredibly unlikely.

I can’t tell if this flaw in human reasoning is immediately obvious. If it’s not, read this article from my favorite philosophy of science blog, then come back for the magick discussion.

I’ve been reading about servitors because I’m thinking of renaming “systems” as “universal servitors.” (I’m also considering “ethereal software” and “intelligent forces.”) The articles I’ve read describe servitors as intelligent, causative agents. Essentially, servitors are minds. You make a servitor by focusing your mind on what you want the servitor to do, imbuing it with life, and sending it out to do its job.

Say that out loud and you’ll feel like “How do servitors work?” is an answered question.

But try to break each step down into its constituent parts, then simulate that all in your mind, like you would a series of chess moves, the operation of a car engine, or the execution of a piece of software. I can’t do it. I can’t go from “the servitor is an intelligent agent” to a step-by-step explanation of what it does, any more than I can go from “Thor is angry” to Maxwell’s Equations.

Invoking a mind produces a curiosity-stopper, rather than a path to a systematic explanation of how magick works.

Does that matter? Well, if you just want to produce magickal results using standard techniques, then a curiosity-stopper is fine. But if your goal is to understand how magick works under the hood and create a magickal equivalent of Maxwell’s Equations, then you need to be hungry for real answers, not fake-satisfied with a curiosity-stopper.

Note: Quick post today since I’m working on a series on the essence of direct magick, which hopefully starts next week.

Another related article: Richard Carrier defines “supernatural” as any explanation where a component is inherently mental, as in “this non-physical mind did something.” Makes a lot of sense to me, and puts into words some of my intuitive distrust of explanations that invoke minds.

You say – “I can’t go from “the servitor is an intelligent agent” to a step-by-step explanation of what it does, any more than I can go from “Thor is angry” to Maxwell’s Equations.”

No, but that’s not the correct comparison, surely? It’s not Maxwell’s equations that are the starting point. Rather, we go from the fundamental idea of “elementary electric charge” to the equations and then to the lightning bolt explanation*. One could imagine proceeding similarly to and from a fundamental idea of “elementary mind aspect” to an account of more complex mind-like systems, such as servitors. (Of course, only very complex systems would be human-mind-like, capable of self-reflection and so on.)

*Electric charge is of course a classic “curiosity stopper”, but then there is always one at some point along the explanatory chain.

Even knowing the answer, I cannot lead someone from “Thor is angry” to Maxwell’s equations. You simply can’t get there from here. I could not say, “Here’s a refinement on your belief.” I’d have to start by dissuading them from their belief before we could make any headway.

But maybe when you say, “elementary mind aspect,” you mean something like a transistor or a nerve. A building block of a mind, which is not itself a mind in any normal sense of the term. I think that’s a great thing to explore. It’s actually an open question for me: What is the basic information-processing unit of ethereal software? I have connections and energy at the very-simple end, and spirits and ethereal software at the very-complex end, but I don’t have a lot of the intermediate steps along the way.

In terms of curiosity stoppers, any term might be a curiosity-stopper, depending on the person. For me, when I say “ethereal software,” I think of all the interactions I’ve had with these forces, of how you find them and use them and program them. Those concepts make “ethereal software” a real answer, not a curiosity stopper. But I can imagine a reader who doesn’t have that personal experience, who hasn’t thought deeply about this stuff yet, being asked, “What drives that manifesting?” and answering, “Ethereal software,” without being able to expand on that answer. For him, those words would be a curiosity stopper.