To understand alpha theory, you have to learn some math and science. To learn math and science, you have to read some books. Now I know this is tiresome, and I am breaking my own rule by supplying a reading list. But it will be short. Try these, in order of increasing difficulty:

Complexity, by Mitchell Waldrop. Complexity is why ethics is difficult, and Waldrop provides a gentle, anecdote-heavy introduction. Waldrop holds a Ph.D. in particle physics, but he concentrates on the personalities and the history of the complexity movement, centered at the Santa Fe Institute. If you don’t know from emergent behavior, this is the place to start.

Cows, Pigs, Wars, and Witches, by Marvin Harris. Hey! How’d a book on anthropology get in here? Harris examines some of the most spectacular, seemingly counter-productive human practices of all time — among them the Indian cult of the cow, tribal warfare, and witch hunts — and demonstrates their survival value. Are other cultures mad, or are the outsiders who think so missing something? A world tour of alpha star.

Men of Mathematics, E.T. Bell. No subject is so despised at school as mathematics, in large part because its history is righteously excised from the textbooks. It is possible to take four years of math in high school without once hearing the name of a practicing mathematician. The student is left with the impression that plane geometry sprang fully constructed from the brain of Euclid, like Athena from the brain of Zeus. Bell is a useful corrective; his judgments are accurate and his humor is dry. Lots of snappy anecdotes — some of dubious provenance, though not so dubious as some of the more recent historians would have you believe — and no actual math. (OK, a tiny bit.) You might not believe that it would help you to know that Galois, the founder of group theory, wrote a large part of his output on the topic in a letter the night before he died in a duel, or that Euler, the most prolific mathematician of all time, managed to turn out his reams of work while raising twelve children, to whom, by all accounts, he was an excellent father. But it does. Should you want to go on to solve real math problems, the books to start with, from easy to hard, are How To Solve It, by Pólya, The Enjoyment of Mathematics, by Rademacher and Toeplitz, and What Is Mathematics? by Courant and Robbins.

A Universe of Consciousness, by Gerald Edelman and Giulio Tononi. A complete biologically-based theory of consciousness in 200 dense but readable pages. Edelman and Tononi shirk none of the hard questions, and by the end they offer a persuasive account of how to get from neurons to qualia.

Gödel’s Proof, by Ernest Nagel and James Newman. Undecidability has become, after natural selection, relativity, and Heisenberg’s uncertainty principle, the most widely abused scientific idea in philosophy. (An excellent history of modern philosophy could be written treating it entirely as a misapplication of these four ideas.) Undecidability no more implies universal skepticism than relativistic physics implies relativistic morality. Nagel and Newman demystify Gödel in a mere 88 pages that anyone with high school math can follow, if he’s paying attention.

Incidentally, boys, for all of the comments in the alpha threads, one glaring hole in the argument passed you right by. It’s in the Q&A, where I shift from energy to bits with this glib bit of business:

Still more “cash value” lies in information theory, which is an application of thermodynamics. Some say thermodynamics is an application of information theory; but this chicken-egg argument does not matter for our purposes. We care only that they are homologous. We can treat bits the same way we treat energy.

I think I can prove this, but I certainly haven’t yet, and my attempt to do so will be the next installment.

A Different Universe is the most relevant book on this topic I have read. John Holland is a better read for understanding complexity. I take Douglas Hoffstadter's recommendations seriously, if not his name's spelling. Some understanding of math is important for understanding thermodynamics, but having read about emergence and all the stuff from A Different Universe, I am not as sure that this understanding is as necessary as I once thought.

To expand a little: the reason I agree with you already is because I’ve been immersed for the past several months in digital systems where electrical energy is indeed treated as 1s and 0s, and in networking where signal entropy is a very real problem. Might An Introduction to Information Theory by J.R. Pierce make a good addition to the reading list?

Also: as much as Godel’s theorems fascinate me, how exactly does that tie into alpha?

Pierce’s book might be worth adding; I will read it and let you know. There are many books worth adding, but a long reading list is worse than none at all.

As for Gödel, the limits of axiomatization and deduction bear pretty directly on alpha theory. It was Gödel who first explained why ethics is hard. It's all about Poisson events. Complex systems by definition exhibit behavior that cannot be deduced from examining their components (their axioms, as it were). They exist in nature (the weather) and we create them too (the market). The only way to figure out how such systems behave is to simulate them. If his theorem were known as "irreducibility" instead of "undecidability," people would understand it better and fly off the handle less.
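One concrete way to see "irreducibility" (my own illustration, not the post's) is an elementary cellular automaton. The update rule fits in one line, yet for Rule 30 no known shortcut predicts the state at step t; the only way to learn how it behaves is to run every prior step:

```python
def step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton
    (Wolfram rule numbering, periodic boundary)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell; each new row can be deduced only by simulating
# every step before it -- the behavior is not readable off the rule.
cells = [0] * 31
cells[15] = 1
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The axioms (the eight-entry rule table) are trivial; the emergent behavior is not, which is the whole point of the weather/market analogy above.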

The relevance of all this will become clearer when I discuss consciousness. If it were possible to deal effectively with the world through sheer deduction our brains would be constructed a lot differently. Edelman and Tononi argue that there are two principles of human thought: logic, analogous to axiomatization; and selectionism, which reaches truths that deduction cannot. We rely, are forced to rely, on selectionism because there can be no such thing as a universal theory to deal with Poisson randomness.

Actually, Gödel's two theorems dealing with the axiomatization of arithmetic should be called incompleteness, as indeed he named them. Speaking of incompleteness, Rebecca Goldstein's new book by that name gives a good, though non-rigorous, explication of the two incompleteness theorems as well as Gödel's first, completeness, theorem. The book is really worth reading, in my opinion, for Goldstein's excellent account of how Gödel's work should have been, and wasn't, treated by the philosophers, most notably Wittgenstein. If you care about the philosophical issues at all (or the personalities), read what Gödel wrote (in letters he didn't send) about Wittgenstein, which starts on page 113, and what Wittgenstein said about Gödel, starting on p. 188.

Also, I cannot recommend enough John Holland's books on complexity. So much better than Waldrop. A Different Universe: Reinventing Physics From the Bottom Down's opening:

"There are two conflicting primal impulses of the human mind–one to simplify a thing to its essentials, the other to see through the essentials to the greater implications. All of us live with this conflict and find ourselves pondering it from time to time. At the edge of the sea, for example, most of us fall into thoughtfulness about the majesty of the world even though the sea is, essentially, a hole filled with water. The vast literature on this subject, some of it very ancient, often expresses the conflict as moral, or a tension between the sacred and profane. Thus viewing the sea as simple and finite, as an engineer might, is animistic and primitive, whereas viewing it as a source of endless possibility is advanced and human.
But the conflict is not just a matter of perception: it is also physical. The natural world is regulated both by the essentials and by powerful principles of organization that flow out of them. These principles are transcendent, in that they would continue to hold even if the essentials were changed slightly. Our conflicted view of nature reflects a conflict in nature itself, which consists simultaneously of primitive elements and stable, complex organizational structures that form from them, not unlike the sea itself.
The edge of the sea is also a place to have fun, of course, something it is good to keep in mind when one is down there by the boardwalk being deep. The real essence of life is strolling too close to the merry-go-round and getting clobbered by a yo-yo."

At a garage sale last week, I picked-up about ten, or so, of the Scientific American Library volumes (mostly on mathematics, cosmology, and physics).

One volume, Mathematics: The Science of Patterns, by Keith Devlin, started promisingly enough, but early into it I realized that the monograph paled in comparison to Dunham's Journey through Genius noted above. Devlin's monograph had many tables and figures, but very little of the actual proofs or verifications of the material.

The result was to give little real insight or understanding as to why or how the mathematics, as presented, was invented or evolved as it did.

Blech.

My point?

There is no substitute for original source material, and if the source material is not available, be sure to find a reference that includes as much of the original source material as possible.

IIRC, that explanation isn't easy and N&N were wise to omit it. I believe Gödel had planned a Part II to give a detailed proof of this claim but never wrote it. Hilbert and Bernays wrote a proof in their Grundlagen der Mathematik, but it's always omitted in every reference–everyone claims it's too tedious to reproduce.

Our goal in this paper has been to formalize the classical ‘T’ as ‘[]’, so that instead of proving A classically, we prove instead []A modally, where A is a wff. To successfully realize this aim we need a conservation result, namely, that this approach proves no classical formula that is not also provable classically.

This is another interesting book, which for some here, may fit nicely into their reading list. I started reading it several weeks ago but am ready to get back into it.

One last point: Cantor was born in Russia, of a Danish father and Russian mother, but lived most of his life, and his mathematics work in Germany. Does this make him a German mathematician?

Just askin’.

And Tommy,

The reason to read about Gödel, his times, his other ideas, his work's impact, or the reaction others had to it, is that the more we know about these subtexts, or aspects of some narrative arc, the better we can understand and appreciate the man and his work.

A book which ought to figure on that list is Penrose's magnum opus The Road to Reality. It works as a layman's "physics handbook" but with a narrative, taking the reader on a journey from basic geometry to modern-day cosmology. The mathematics, although extensive in comparison with other popular science books, is used sparingly, yet in sufficient quantities to allow the interested reader to master the fundamentals. In a way, Penrose hits the nail on the head. Mathematics, as well as most other subjects, has two dimensions of depth: the "creative" problem-solving dimension, and the rote learning of mathematical concepts. Thus, as long as the presentation is simple, complex analysis offers no problem for the layman who has just glimpsed the fundamentals of real analysis for the first time in the preceding chapter. Not to frighten off too many prospective buyers, Penrose also asserts that the book can be read 100% formula-free.

On your point: Goldstein’s book is topic relevant, in that she guides us, in part, through the proof, but more importantly, as above, she puts the proof into context.

And again, why Gödel?

Per Aaron: "…the limits of axiomatization and deduction bear pretty directly on alpha theory. It was Gödel who first explained why ethics is hard. It's all about Poisson events. Complex systems by definition exhibit behavior that cannot be deduced from examining their components (their axioms, as it were)."

Can the limits of axiomatization lead us to models and understanding of consciousness? I don't know, but I believe that Aaron and Bourbaki are on to something.

Alpha theory, in its current form, directs us toward an energetic model of ethics. But within that model, there exist several different and seemingly distinct (non-interchangeable) measurable states. Can S and some information quantity (bits, qualia) be reconciled within the same formula?

Back to Aaron’s answer above:"The only way to figure out how such [complex] systems behave is to simulate them."

In other words, our ability to find an understanding of consciousness, and then perhaps human behavior, lies in our ability to mathematically model consciousness; per Gödel, the limitations in those models lie in the general limitations of axiomatization.

Funny you should mention Penrose. I just finished reading The Emperor’s New Mind last night, and I refuse to read The Road to Reality unless Penrose has gotten a much better editor since then.

Penrose is smart but very frustrating — there are lots of lucid nuggets in the book, but you have to wade through nearly 600 pages of his verbose meandering to get them all, by the end of which he hasn’t actually formed any kind of convincing case. He just talks and talks and talks, and doesn’t seem to have very good judgement about what to dwell on and what to touch briefly on or omit. I think the book could have been condensed to under 400 pages and still hit all the important bases.

Besides, I don’t think I have the patience to wade through another thousand pages of exclamation mark abuse. On every page, there are at least two sentences like this! (And at least one in parentheses, like this!)

"In any case, the interesting conjecture is that there appear to be only two deeply fundamental ways of patterning thought: selectionism and logic."

Godel revealed the limitations of logic. This limitation influenced how alpha theory was derived.

And also,

"While these attempts give due scientific recognition to the subjective domain, subjectivism itself is no basis for a sound scientific understanding of the mind. Consequently, we reject phenomenology and instrospectionism, along with philosophical behaviorism.

…

The history of science, particularly of biological science, has shown repeatedly that apparently mysterious or impassable barriers to our understanding were based on false views or technical limitations. The material bases of mind are no exception."

Although I have not read The Road to Reality, the reviews indicate that it is an excellent treatment of the subject. However, I was disappointed with Penrose's attempt to attribute consciousness to quantum gravity in The Emperor's New Mind.

"Penrose proposes that the physiological process underlying a given thought may initially involve a number of superposed quantum states, each of which performs a calculation of sorts. When the differences in the distribution of mass and energy between the states reach a gravitationally significant level, the states collapse into a single state, causing measurable and possibly nonlocal changes in the neural structure of the brain. This physical event correlates with a mental one: the comprehension of a mathematical theorem, say, or the decision not to tip a waiter."

"There are no completely separate domains of matter and mind and no grounds for dualism. But obviously, there is a realm created by the physical order of the brain, the body, and the social world in which meaning is consciously made. That meaning is essential both to our description of the world and to our scientific understanding of it. It is the amazingly complex material structures of the nervous system and body that give rise to dynamic mental processes and to meaning. Nothing else need be assumed–neither other worlds, or spirits, or remarkable forces yet unplumbed, such as quantum gravity."

My thoughts exactly. Penrose never really convincingly drives his case home; it's just anticlimactic speculation at the end. Since then, Max Tegmark has pretty convincingly killed the idea that quantum effects significantly affect the brain. Penrose's central point about the limited usefulness of algorithms was lucid and dead on, though.

Much as I love Popper, he did believe a couple of rather silly things. His insistence on dualism and indeterminism always baffled me, not to mention his reluctance to admit Darwinism into the realm of science.

You’re welcome. I’d be baldfaced lying if I claimed to be able to follow all the technical specifics of the paper, but the gist of it is clear. When we’re talking about a timescale difference of ten orders of magnitude or more between the quantum decoherence and neuronal firing, there’s not a lot of room for Penrose’s quantum magic and mystery to enter into things.

While we’re on the subject, has anyone here read V.S. Ramachandran? If so, would you recommend him?

If alpha is unit-less, what then are the units (if any) contained in F, and how are they reconciled? If there are no units, what then comprises F? In other words, on what does this function act, and is it energetically dependent?

I know I have asked this question before, but I am concerned that within F lie some potential problems.

Does F rely on qualia, bits, or some other "currency"? How does this "currency" maintain itself? How can we rely on its validity, i.e., its "truthfulness", and can we be sure that our "ethics", our "alpha", doesn't require axiomatization of its own when it comes to the foundations of F?

Here, from Tegmark:

"In practice, the interaction of Hint between subsystems is usually not zero. This has a number of qualitatively different effects:
1. Fluctuation
2. Dissipation
3. Communication
4. Decoherence
The first two involve transfer of energy between the subsystems, whereas the last two involve exchange of information. The first three occur in classical physics as well-only the last one is a purely quantum-mechanical phenomenon…
We will define communication as exchange of information. The information
that the two subsystems have about each other, measured in bits, is
I12 S1 + S2 − S, where Si −tr ii log i is the entropy of the ith subsystem, S −tr log is the entropy of the total system, and the logarithms are base 2."

This type of relationship is similar to what we find in Edelman and Tononi, but Edelman and Tononi only seem to account for the genesis of consciousness, not its specific content, and there seems to be a missing step (or steps) that leads to F.

In previous posts, Bourbaki argued that F is a filtration rather than a mathematical function, but still it possesses information.

"While these attempts give due scientific recognition to the subjective domain, subjectivism…"

Is it wrong to say that Gödel's success was inherently born within him (at the moment of conception) and then to extrapolate something similar occurring onto those subject to his idea, even those who lack a consequential understanding of it? Isn't that what emergence is, when the parts alone cannot explain the whole (this is in reference to the birth of understanding)?

I read Gödel, Escher, Bach a long time ago, so I don't know if I fully understand him or his ideas, but it seems to me that if his topic relevance comes from the fact that he showed a limit to logic, and you are then saying that his limit is what leads us to this:

"While these attempts give due scientific recognition to the subjective domain, subjectivism itself is no basis for a sound scientific understanding of the mind. Consequently, we reject phenomenology and instrospectionism, along with philosophical
behaviorism."

then it seems like topical complementary mutualism. There are many ideas that demonstrate the same criteria, if not to the same, ahem, degree or actuality, then something of corollary significance. I guess your reading about Gödel is one of the principles that inspired within you a seed from which emerged alpha theory?

I don't know though; I remember many posts back you said it took you months of staring at equations to figure it out, and you didn't drop Gödel then when I asked about that. Having admitted that I don't know shit about him, though, I am not denying his importance so much as denying the need to read a book or two on him to understand alpha theory. Should I study the emergence of all thoughts that relate to alpha theory to understand alpha theory? If so, alpha theory is a very weak theory (it should do this on its own without books' worth of reference, especially when time might be better spent deriving flaws in its implications and pronouncements than in continually trying to immerse the mind in its intellectual foundings… dun dun dun, because emergence has taught us nothing if it hasn't taught us that doing this is not only counterproductive to a comprehensive understanding in many ways [the historical ideas can become too big and tangled, and even, based on an author's predispositions, contradict certain other authoritative assertions professed by ancillary subjects of study] but also because it can never fully account for what emerged from it).

In the back of my brain it seems to me that some of what Gödel says actually undermines many ways we have tried to present alpha theory on this blog, and it seems like a corollary to Foucault's attacks on structuralism. But you have to consider many mathematical proofs and scientific laws as simple acts of emergence if you buy into complexity and emergence, and this is the most obvious reason why Newton's laws break down at such a small level, and why quantum mechanics don't get big. Or am I misunderstanding emergence?

"Godel’s great stroke of genius–as readers of Nagel and Newman will see–was to realize that numbers are a universal medium for the embedding of patterns of any sort, and that for that reason, statements seemingly about numbers alone can in fact encode statements about other universes of discourse. In other words, Godel saw beyond the surface level of number theory, realizing that numbers could represent any kind of structure"

You quoted Hofstadter here, but I wonder, where he speaks of discourses, is he referencing the type as exemplified and defined by Foucault? If so, it means somewhat less than what you might think.

"One of the greatest disservices we do to our students is to teach them that universal physical law is something that obviously ought to be true and thus may be legitimately learned by rote."

This isn’t how science is taught. And we continue with

"They are not fundamental at all but a consequence of the aggregation of quantum matter into macroscopic fluids and solids–a collective organizations phenomenon."

Followed by

"Astonishing as it may seem, many physicists remain in denial."

You’ve just stepped into a long running debate between particle physicists and condensed matter physicists. From a review:

Laughlin and Pines advocate the search for "higher organizing principles" (perhaps universal), relatively independent of the fundamental theory. I give them credit for emphasizing that many different underlying theories may lead to identical observational consequences. But they turn a blind eye to the idea that in many important physical settings, the detailed structure and parameters of the Lagrangian are decisive. They campaign as well for the synthesis of principles through experiment, which I also recognize as part of the way we do particle physics. I believe that the best practice of particle physics—of physics in general—embraces both reductionist and emergentist approaches, in the appropriate settings.

Overall, I am left with the impression that Laughlin & Pines are giving a war to which no one should come, because the case for their revolutionary intellectual movement is founded on misperception and false choices. Perhaps the best way for us to be heard is to listen more closely, try to understand the approaches we have in common, and—occasionally—to use their language to describe what we do. It is important for us to seek the respect and understanding of our colleagues who do other physics, in other ways.

This debate has no impact on what we’re discussing. And we’re not going to be able to participate in this debate based on a few quotes and papers.

We are not looking to unify the forces of nature.

Thermodynamics is one of a few mature fields epitomized by a well-defined, self-consistent body of evidence. The essence of the theoretical structure of classical thermodynamics is a set of natural laws governing the behavior of macroscopic systems. The laws are derived from generalization of experimental observations and are independent of any hypothesis concerning the ultimate nature of matter and energy.

"but I wonder where he speaks of discourses is he referencing the type as exemplified and defined by Foucault? If so, it means somewhat less than what you might think."

You’re going to have to be more specific. I have no idea what this means.

As F is a probability function, at what scale does it occur? In other words, does the probability of action occur at the Eustace level, the brain level, or the neuronal-subsystem level?

Do you follow where I am going with this?

Alpha is unit-less, and ultimately it depends on a probability function. But information, as a "currency", depends on the function of neural substrates, which themselves have their own entropy and signalling characteristics beyond (or within) the alpha-system.

Again, there seems to be a need to account for information I, and its entropy. Furthermore, the "truth" of that information, although not axiomatic, seems to approach the need for axioms.

"but also becasue it can never fully account for what emerged from it"

Isn't this essentially what Gödel says, and what was so significant about him in the first place? If so, it is ironic that your understanding of him (which led you to recommend him) actually made you miss the fact that in many ways we cannot conclude that this course of knowledge is not inherently flawed and even worthless to our overall understanding of alpha theory. Well, not worthless, of course; the worth would just be incomplete.

maybe that was a bloody brilliantly incomplete tautology on emergence and

Perhaps I am perverting and misusing the idea? But also, really, your argument for his relevance was weak also:

"Godel revealed the limitations of logic. This limitation influenced how alpha theory was derived."

I wonder if that is mere childish name calling. How might we derive an answer? I would say alpha theory, but you know, thermodynamics are hard to manage by meself.

Metoothen: hey, what if we assume that most coherence and confluence in math AND logic are simply products of emergence. Wouldn't F be emergent, i.e., to a small/large degree transcendent from its parts, because it would and could remain the same despite different internal changes? I.e., an emergent result could be the same even if we changed several different components and supplanted them with other actions that had the same result, obviously. This is even what occurs with entropy, when entropy represents the number of internal states that a body might take while remaining unchanged on the outside. Look at my Hawking quotes from a few posts back for more on that.

Any Eustace is an alpha-model that reacts to events. The events to which Eustace can adapt can be no larger than F@t. In fact, they’re always much smaller.

When something "happens", energy flows. The scale is the resolution, either direct or indirect, of these events. These occurrences (all of them) become part of the filtration.

The standard references for this material are Karatzas, Chung and Oksendal but they’re definitely not for the faint of heart.

Information is a loaded term. From here on, assume that by information we mean signal or interaction.

We don’t care about the meaning of any signal–we’re dealing with information in the pure Shannon sense. Recall from your earlier post

Also, the distinction between the meaning and merit of meaning of the constant signs is tricky (well for me anyway). p.71

Shannon was wise to call his model a theory of communication. We’ll work a lot of the material into future posts as we integrate information theory. Part of the challenge is doing so without making any of these books required reading. We haven’t touched qualia yet–and can’t until we get through information theory.

If you'd like to jump ahead, I've heard good things about A Probability Path by Resnick. I have his Adventures in Stochastic Processes, which is decent but quite possibly the nerdiest book title I've ever seen.

A filtration F is part of the definition of a measurable space in probability theory.

A stochastic process is a model for the occurrence of a random phenomenon. The randomness is captured by the introduction of a measurable space (Ω, F), called the sample space, on which a probability measure can be placed. Ω is the set of all possible events, while F is the set of all events that have occurred.

In other words, F@t represents all available information up to some time, t. In our coin-flipping game, it represents your realized path of wins and losses @t.
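To make the growing-information picture concrete, here is a deliberately informal sketch (a real filtration is a nested family of sigma-algebras, as in the Karatzas and Oksendal references above, but the growing record of flips captures the idea of F@t):

```python
import random

random.seed(7)  # fix the realized path for reproducibility

# F@t for the coin-flipping game, pictured informally as the record of
# everything observed up to time t. Information only accumulates, so the
# records are nested: F@0 within F@1 within ... within F@T.
flips = [random.choice("HT") for _ in range(5)]
filtration = [tuple(flips[:t]) for t in range(len(flips) + 1)]

for t, f_t in enumerate(filtration):
    print(f"F@{t} = {f_t}")
```

At t = 0 the record is empty; each step reveals one more flip, and nothing already recorded is ever lost, which is exactly the "all available information up to time t" reading.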

The implied probabilities manifest in implied events. These events have thermodynamic consequences that can be measured to produce the dimensionless quantity, alpha.

I don’t know though, I remember many posts back you said it took you months of staring at equations to figure it out, and you didn’t drop Godel then when I asked about that.

Go back to Part I and check out the comments for mentions of Gödel. A lot of ideas have influenced the development of the theory. Gödel was particularly important because his work led to the choice of the first and second law.

Worse, for any dynamic system, we cannot predict all possible external influences–the best possible crystal ball is a blurry, probabilistic one.

hey, what if we assume that most coherence and confluence in math AND logic are simpy products of emergence.

Tommy, you can assume anything you like. But first see CPPD (also in the comments section of Part I) and explain how your assumption is (or can be) supported.

Be careful not to conflate information we gain by studying evidence (F@t-) with our model of how things will turn out in the future (F@t+). In this sense, all models of the world are not equivalent narratives.

"One of the greatest disservices we do to our students is to teach them that universal physical law is something that obviously ought to be true and thus may be legitimately learned by rote."

This isn’t how science is taught. I wonder what teachers you know.

That’s how I was taught. I don’t believe I met a single teacher who could appreciate the complexity of that fundamental shift in perspective properly until I was in 9th grade.

Also, collective organizational phenomena are not "a qualitative assertion" any less than alpha theory is, and if you are saying that the way we understand (and can understand) the universe is not proportional to our use and understanding of organization (whole collections of 'em, even!) then you understand the human brain to be much different than I do.

I get the feeling you are reading lots and lots about information theory right now, but what about complexity, chaos, emergence, and the human brain? Did I jump the gun reading these books before understanding information theory?

There’s a big difference between challenging scientific laws with well researched contradictory evidence and challenging scientific laws with unsupported speculation and rhetoric.

Also, collective organizational phenomena are not "a qualitative assertion" any less than alpha theory is […]

1. What is your hypothesis?
2. How do you propose to collect evidence to support it?
3. What are you measuring?
4. What will you be able to explain better than existing theories?

I get the feeling you are reading lots and lots about information theory right now

We’re trying to write about information theory to make it as accessible as possible. I think we missed the mark on some of our installments and, based on the comments, are trying to ensure we address those oversights.

Did I jump the gun reading these books before understanding information theory?

Not at all. But try to use the suggestions above when framing your argument. Keep it short and self-contained. Three posts in a row with PoMoSpeak is difficult to decipher.

Define your terms or provide links to them. When I see something like "topical complementary mutualism" and Google turns up nothing, I have no idea what you’re talking about.

I reread Part 5, and have come to the same place as I was before, still uncertain as to F.

Humor me, or at least, try to follow where I am leading you, thanks.

Above I characterized F as a function. This was incorrect, in that it is not along the lines of f(x), read: f of x.

But Eustace, or a brain, or a neural subset, or a single neuron, or a synapse, must do something with the members of the set F at some time, t. In this case, the "do something" would be to communicate the element(s) of F?

And this communication, again, must occur at some level, or levels, and my question, therefore, is whether or not this communication (what I meant as a function, or process), is energy dependent, and if so, can you account for its entropy?

Why am I asking this?

I have the sense, if I can remove myself from Eustace, and speak now of the human nervous system, that in order to allow for the universality of alpha, as a process, as an event, or as a function of the nervous system, its substrate should at least be hinted at, or suspected, even if not known.

This seems correct to me because after all, alpha occurs somewhere, in some space and at some time. The space is between our ears, somewhere, and the time, well, it occurs all the time, but at different intervals and, perhaps, over differing durations (serial alphas.)

Now as a mathematical model, perhaps for now, these issues do not yet (or ever?) need to be understood. But again, for me, it is the substrate that interests me first, and its application second.

Whether it is our personal experience, or clinical experience, or per Edelman’s model, there are (sub)types of information, or qualia, that hold special or elevated status. These experiences once laid down in our "memory" will, forever henceforth, have sway over subsequent experiences, most notably our access to them and their effect on current qualia.

How this hierarchy of memory (read synaptic signalling and its subtending architecture) is created, maintained, and utilized seems to be missing from the description of how Eustace and its F interact or relate.

The laying down of memory (the creation of the elements of F), its storage and retrieval (the communication of elements of F), seemingly needs to be accounted for, no?

You write that "in order to allow for the universality of alpha, as a process, as an event, or as a function of the nervous system, its substrate should at least be hinted at, or suspected, even if not known." OK, let’s talk substrate.

I understand you to be asking two related questions here. First, what undergirds alpha? And second, what is alpha itself? It might help to have another look at the original alpha equation:

α = (ΣΔH – TΣΔS_neg) / TΣΔS_pos

where H is enthalpy, T is temperature, and S is entropy. All three terms in the equation are in units of energy, and they are all deltas. This makes the answer to the first question apparent: energy flux undergirds alpha. The second question is a bit trickier, but I took a stab at it in Part 2, where I wrote, "[alpha] can be thought of as the rate at which the free energy in a system is directed toward coherence, rather than dissipation. It is the measure of the stability of a system." Here we begin to reach the limits of language. It is dangerous to try to reify a rate. "What is" questions, in science, traditionally go nowhere. My next post, on entropy, will elaborate. Thinking of entropy as "disorder" is OK in thermodynamics, but in communication theory it leads to disastrous misunderstandings, although the equations are identical. What we want to ask instead is, how can we quantify it? What relations obtain?
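For concreteness, here is that ratio as a few lines of Python. This is only a numerical sketch: the function name and the sample values (a hypothetical system at body temperature) are illustrative assumptions, not anything taken from the theory itself.

```python
# A minimal sketch of the alpha ratio quoted above. All quantities are
# energy deltas; the sample values are assumptions for illustration.

def alpha(delta_H, T, delta_S_neg, delta_S_pos):
    """Ratio of free energy directed toward coherence vs. dissipation.

    delta_H     -- summed enthalpy changes (J)
    T           -- absolute temperature (K)
    delta_S_neg -- summed entropy-decreasing (ordering) terms (J/K)
    delta_S_pos -- summed entropy-increasing (dissipative) terms (J/K)
    """
    return (delta_H - T * delta_S_neg) / (T * delta_S_pos)

# Hypothetical system at roughly body temperature:
print(alpha(delta_H=500.0, T=310.0, delta_S_neg=1.2, delta_S_pos=2.5))
```

Note that every argument is in units of energy (or energy per kelvin, multiplied by T), so the ratio itself is dimensionless, consistent with reading it as a rate of coherence rather than a quantity to be reified.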

As for your other post, which crossed my reply: sure, we want to know how memory works, how human beings process F, all of those nifty things. But we haven’t even introduced consciousness yet. For now we claim only that Eustace employs an alpha model to react to F. Give us a chance here. The next installment will bring us quite a bit closer to the human realm.

Here’s another try. I think you’ve latched on to an incorrect interpretation of filtration.

It’s important to use terms only as they’re defined in Parts 2-6. More definitions are to come and the specifics will follow the abstract. There will be applications of the principles we’ve established but we need to be thorough.

The notion of a filtration is reminiscent of the artist’s concept of negative space–to understand the object, first understand the space around it. Here it’s a little tricky–we’re dealing with a dynamic process, so rather than space, consider the same concept in time. The positive space is its present state. Its negative space is the events in its history–its filtration.

But Eustace, or a brain, or a neural subset, or a single neuron, or a synapse, must do something with the members of the set F at some time, t. In this case, the "do something" would be to communicate the element(s) of F?

Eustace [or a brain, or a neural subset, or a single neuron] is affected by a stimulus or interaction and may, in turn, react by affecting something else or changing its own configuration.

But let’s start with something much simpler so we don’t get buried in details. Consider a stone in a river. Its surface is shaped by numerous interactions with the passing fluid. All those interactions, collectively, are the subset of F@t that manifest in its shape now.

Now that’s not very dynamic. Let’s try something a bit more complicated. Consider a G protein-linked receptor. Ligand binding activates a G protein, which in turn activates or inhibits an enzyme that generates a specific second messenger, or opens an ion channel, causing a change in membrane potential.

How rich and diverse this effect is depends on the system complexity, context and how Eustace adapts to these stimuli. Which of these configurations are selected depends on their alpha.

One of the founders of modern probability, Kolmogorov, wrote: "the epistemological value of probability theory is based on the fact that chance phenomena, considered collectively and on a grand scale, create a non-random regularity."
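A ten-second simulation makes Kolmogorov’s point concrete (the coin and the sample sizes below are arbitrary assumptions of mine, chosen only to show the effect):

```python
# Kolmogorov's "non-random regularity," sketched: each coin flip is
# unpredictable, yet the aggregate frequency settles onto a stable value.
import random

random.seed(0)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: frequency of heads = {heads / n:.4f}")
```

No individual flip can be predicted, but the frequency on a grand scale converges toward 0.5: randomness in the small, regularity in the large.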

Similar patterns crop up in models of dynamic, interacting systems, ranging from neuron depolarization to forest fires to mass extinctions. In each case, an individual element or class of elements is subjected to some pressure or stimulus, builds up toward a threshold, then suddenly relieves this stress and spreads it to others, potentially triggering a domino effect.
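That build-up-toward-a-threshold dynamic can be sketched in a few lines. The model below is a toy of my own devising (the ring topology, threshold, and slightly lossy spread are all assumptions), not anything from alpha theory itself; it only shows how local threshold-and-relieve rules produce occasional domino effects.

```python
# A toy threshold model of the build-up-and-release dynamic described
# above (neurons, forest fires, sandpiles). Parameters are illustrative.
import random

random.seed(1)
N = 50                 # number of elements, arranged on a ring
THRESHOLD = 4.0        # stress level at which an element "fires"
SPREAD = 1.8           # stress passed to each neighbor per firing
                       # (slightly lossy, so every cascade dies out)
stress = [0.0] * N

def drive_and_relax():
    """Add stress at one random site, then cascade until every site is
    subcritical. Returns the avalanche size (number of firings)."""
    stress[random.randrange(N)] += 1.0
    active = [i for i in range(N) if stress[i] >= THRESHOLD]
    fired = 0
    while active:
        i = active.pop()
        if stress[i] < THRESHOLD:   # may have been relieved already
            continue
        stress[i] = 0.0             # suddenly relieve the stress...
        fired += 1
        for j in ((i - 1) % N, (i + 1) % N):
            stress[j] += SPREAD     # ...and spread it to the neighbors
            if stress[j] >= THRESHOLD:
                active.append(j)
    return fired

sizes = [drive_and_relax() for _ in range(5000)]
print("largest avalanche:", max(sizes))
```

Most drives cause nothing at all; a few trigger chains of firings, which is exactly the sudden-relief-and-spread pattern described in the paragraph above.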

If a given Eustace can adapt to these regularities, it will have a selective advantage. In order to adapt, it needs complexity sufficient to capture these regularities or patterns. The "information set" Eustace uses can never be greater than F@t. You appear to be using F like a self-contained, independent object. Rather, F is analogous to context.

How this hierarchy of memory (read synaptic signalling and its subtending architecture) is created, maintained, and utilized seems to be missing from the description of how Eustace and its F interact or relate.

The resulting nascent complex structures make possible yet more complex structures. This complexity offers additional degrees of freedom that can then convert available free energy into yet more complex structures.

If a particular receptor configuration leads a bacterium to directed chemotaxis up a maltose gradient, it will harvest free energy more effectively than one that moves in the opposite direction.

This process of converting free energy into complexity and organization (alpha) can lead, for example, from transmembrane signalling to intercellular signalling.

And yes, the notion of threshold (especially as it relates to neuronal functioning) is something I understand and appreciate well.

Here: "You appear to be using F like a self-contained, independent object. Rather, F is analogous to context."

Yes and no.

I understand the idea of F as context. This, by the way, was well put. Also, receptor signalling seems like a good example. G-proteins are just one example. The regulation of post-translational receptor dynamics is another. Models of epilepsy (e.g., secondary human epileptogenesis) and chronic pain (peripheral and central sensitization) are clinical examples.

My concern, or confusion (take your pick), is in trying to place F somewhere. It is the where, and under what rules that structural system operates, and what energetic constraints that system must follow, or is subject to, that I have been inquiring about. In other words, if the structural substrate has its own alpha effect, is it accounted for in the "larger" alpha?

My concern, or confusion (take your pick), is in trying to place F somewhere.

You can’t place F somewhere, you place it sometime. F@t is everywhere. In our model, it is the wake of all thermodynamic interactions. All Eustaces are exposed to some subset of F.

In other words, if the structural substrate has its own alpha effect, is it accounted for in the "larger" alpha?

But where does this "larger" alpha come from?

Since we are working within the constraints of a conservation law, any alpha that manifests in some larger structure must arise from some interaction within that structure or through some interaction with another structure. And that alpha, in turn, must arise from yet finer grained interaction. This "gears within gears" interaction extends down to the level of individual chemical kinetics where one molecule or molecular configuration is changed into another.

At all times, each cell is teeming with choreographed chemical reactions–all of which can be reproduced in vitro. These cells take in free energy to perform synthesis and repair (increase numerator) and to release waste (decrease denominator). They can interact with each other through their boundaries. This higher-order, macro-interaction may lead to more alphatropic configurations.

The choreography in this apparent chaos is made possible by the specificity of enzyme reactions (our form of biased coin). Each enzyme possesses two extreme and opposite properties simultaneously: enormous reactivity to its substrate and near total indifference to other molecules. Because of this indifference, everything actually proceeds as if each of the numerous reactions were taking place alone within the cell according to the "one reaction, one reactor" principle.

At their boundaries, cells are dealing primarily with diffusion-driven interactions. But this isn’t the whole story. To have any chance to adapt to Poisson-type phenomena, each Eustace must also be able to react to second-order signals (e.g. smell, sound, vibrations) that correspond to an impending event. In other words, these second-order interactions cause it to respond before the actual event occurs. How much before depends on some notion of information.

But now we’re getting into material to be covered in upcoming installments. In the meantime, let me know if this clears things up.

If I took the time and had the money, where might I meet you (say, that famed coffee shop) to discuss this in person? I understand things about 500 times better when someone is speaking to me than when I sit alone inside my head bouncing random interpretations off each other to process the elimination of unnecessary clutter. By chance, might you have a spare weekend in the next whenever? I am referring to several of you, save Bill, who still won’t email me his equations.

It’s definitely easier to understand this material if there’s someone around to explain it. But it’s too much material for a weekend.

From the information on your LiveJournal page, you’re about two to three hours from some excellent universities. You should consider starting there. Most departments have free weekly colloquia that are open to the public. These are good opportunities to get to know professors and students. Be polite and courteous. Spend most of your time listening.

Once you’ve developed a rapport, you can ask questions. Don’t start by asking someone to explain all of alpha theory to you. No one will have heard of it. Fortunately, everything in alpha theory is built on some branch of science so if you break out your questions and phrase them properly, they’ll fit right in. And more often than not, someone will be happy to answer them.

If this sounds like a lot of work, it is. But it can also be a lot of fun. Most importantly, alpha theory aside, it may provide some food for thought about what you might want to do next.

As to where the network resides, that remains in doubt. There are those who believe that epileptogenic behavior resides in the function (and control) of membrane ion channels; others hold to the (traditional) belief that the behavior represents the function of neuronal groups or subsets.

Perhaps from your learning about my clinical perspective (experience) you can see why I focus on neuroanatomical localization.

Back to Gödel:

In the Yourgrau book, especially at its end, the case is made for Gödel as the philosopher. This discussion is preceded by showing how Gödel was able to distinguish the formal from the intuitive. And it is here, in the intuitive, where matters of human consciousness diverge from the "thinking machine," including those that demonstrate emergent behaviors.

And from Gödel to Edelman, well… eventually.

The Alpha model is something that I appreciate (apprehend) but, as these past months have shown, have not fully understood.

Your patience and persistence are greatly appreciated.

Lastly, from Gödel: "Actually, it would be easy to get a strict ethics–at least no harder than other basic scientific problems. Only the result would be unpleasant, and one does not want to see it and avoids facing it–to some extent even consciously." Yourgrau, p. 165

Your questions are much appreciated. We’ve mentioned in the past that we’re developing this idea for a book that makes a seemingly outlandish claim. A forum like this is just the place to avoid the associated headaches.

The installments and discussions have been very helpful in identifying where our explanations need clarification and expansion.

We’re approaching this problem from the bottom up. Let’s summarize what we have so far:

François Jacob pointed out: The beginning of modern science can be dated from the time when such general questions as "How was the Universe created? What is life?" were replaced by more modest questions like "How does a stone fall? How does water flow in a tube?"

Respecting this sentiment, we’re taking a thorough though tedious path through the science. These installments have helped to gradually change the responses of my friends and colleagues from a priori rejection to guarded interest to outright enthusiasm.

Perhaps from your learning about my clinical perspective (experience) you can see why I focus on neuroanatomical localization.

Agreed. Unfortunately, for something as daunting as neuroscience, we can’t go from bottom up to top down at this stage. We’re still missing some pieces (or marbles!). We extrapolated to ethics because, as yet, there’s no science there at all.

In the meantime, let me know if Resnick doesn’t clear up the notion of filtration as we’ve been (somewhat informally) using it.

Actually, it would be easy to get a strict ethics–at least no harder than other basic scientific problems.

If I may put on my pedant hat for a minute, and go off on a philosophical digression:

As Occam would say, "Look, ma, no creator!"

No, actually the man would say the exact opposite. William of Ockham believed that the only strictly necessary entity is God, and that everything else was contingent. Amusingly, answering "God did it" to every question is indeed the most consistent application of his eponymous Razor, since it requires only one postulate. Of course it explains absolutely nothing, which is why "Occam’s Razor" tentatively gets my vote for being the most misunderstood and abused concept in epistemology.

Not a problem. It’s all very philosophy inside-baseball, and I’d probably be better off knowing more about math instead.

I prefer to forget Occam and invoke Popper instead (surprise): explanatory power is equivalent to degree of falsifiability (since the more possibilities theory prohibits, the more it explains), which is equivalent to simplicity (since the more complex a theory is, the harder it is to falsify), which is equivalent to the paucity of parameters in the theory (since the fewer parameters there are, the simpler and more falsifiable the theory is).

The difficulty people have in grasping these ideas is a reminder that mathematical thinking is, at some level, deeply unnatural. It goes against the grain of human thought and language. Never mind analysis, this is true even of basic arithmetic. In the preface to Principia Mathematica, Whitehead and Russell note that

The very abstract simplicity of the ideas of this work defeats language. Language can represent complex ideas more easily. The proposition "a whale is big" represents language at its best, giving terse expression to a complicated fact; while the true analysis of "one is a number" leads, in language, to an intolerable prolixity.

This is surely right. A whale is, by any standard of complexity that makes sense, a vastly more complicated thing than "five", yet it is a much easier thing for the human mind to apprehend. Any tribe of human beings that was acquainted with whales would certainly have a word for them in their language; yet there are peoples whose language has no word for "five" even though five-ness is there, quite literally, at their fingertips!

I repeat, mathematical thinking is a deeply unnatural way of thinking, and this is probably why it repels so many people. And yet, if that repulsion can be overcome, what benefits flow!

The first of these methods is the method of isomorphism. This depends on the supposition that, if in two hypotheses the consequences are the same, the two hypotheses may be considered as identical for all purposes of further reasoning. In other words, there is no use in drawing arbitrary distinctions where none really exist. When we reason from a hypothesis, its consequences come into play at every step of the reasoning; and if those consequences are the same, all reasoning will be the same, and therefore no difference can really be drawn. Again, a question of decision between two theories whose consequences are and must be the same must necessarily be one where no evidence is obtainable, and is therefore a question which cannot be discussed at all. It is like the old question of the man and the monkey: "If a monkey is on a pole, constantly facing a man who walks around the pole, has the man gone round the monkey?"

I don’t follow as closely as I did — no time. One query: There seems to be some sort of relationship being introduced between thermodynamics and information theory. I don’t see it. Where is the statistical mechanics angle of information theory? Doesn’t the concept of enthalpy entail a pressure integral? Where is the pressure integral in information theory?

Bourbaki, first of all, math is language. Tell me why it isn’t. I will help you to see what I see, and though I may be wrong, if I know one thing (I might not) it is language. A system of description that can explain both itself and something else. Language. Math is not language?

By the way, it occurred to me that you still have not read Sidis’ rape of the reversibility of thermodynamics. Reading as many books as you do, it is ridiculous that the one book that offers a contrary interpretation of thermodynamics (a book of very adequate language and description) has not been read, or did you read it? If nothing else, his interpretation of the reversibility of thermodynamics indicates that the laws are emergent. That is, if what he says is true. It certainly sinks (sp :P) with what Laughlin intuits.

Not so fast, please. Delay throwing your stones at my meager glass house until giving the work at least some VERY serious consideration, seeing as how the man was undeniably one of the most brilliant men EVER. His dad taught himself to read and write English in 4 months while holding a full-time job and mastering (self-taught) advanced mathematics, and he made his dad look like a ninny.

"And he wrote books — some under his own name, others under pseudonyms. In 1925 he published a remarkable book on cosmology in which he predicted black holes – 14 years before Chandrasekhar did." This means he knew about this (and much else) long before he published it, if the history of the work can be believed. The quote is from John Lienhard at the University of Houston ("where we’re interested in the way inventive minds work"), part of his Engines of Our Ingenuity series.

You can define language any way you like, but there are strict operational differences between spoken language and mathematics. Any statement in human spoken language is predicated on context.

"Your bike is in the window."

On a nice, sunny day this may imply that it is on display. During a hurricane, it may indicate that it has been tossed through the glass. Or it may be some idiomatic phrase that means "you’re screwed". Its content is plastic. This versatility is its strength.

Mathematics doesn’t operate this way. Aside from a core set of axioms and operators, you cannot write down an equation, have someone else calculate the result, and then claim that it was not what you meant. This precision is its strength.

For example, if we grind that bike in the window into its elemental constituents, please tell me where the information to reverse the process is stored. Maxwell’s demon has some serious information requirements and energy costs.

However, information storage does not require Maxwell’s demon while heat storage certainly does.

Maxwell’s demon is fictitious. We don’t require him at all. He was a thought experiment designed to circumvent the second law. The flaw in the theory was that it did not account for the tremendous amount of information the demon needed to do his job. The storage and processing of information requires energy which is expressed in the same units as heat storage. This is our bridge.
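To put a number on that bridge: Landauer’s principle gives the minimum thermodynamic cost of erasing one bit of information, kT ln 2, which is the standard quantification of the demon’s bookkeeping burden. A quick check (room temperature is my assumption):

```python
# The information-heat "bridge" in numbers: erasing one bit of
# information costs at least k * T * ln 2 of energy (Landauer's bound),
# which is why information processing and heat share units.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K (assumed)

cost_per_bit = k_B * T * math.log(2)
print(f"minimum cost to erase one bit at {T} K: {cost_per_bit:.3e} J")
```

The number is tiny per bit, but it is strictly nonzero, which is what defeats the demon: his memory cannot be reset for free.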

The better question, though, is why they are alike at all.

The concepts of entropy in thermodynamics and in information theory are not the same thing. However, if you look at Shannon’s paper, you’ll recognize similarities.
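The similarity is easy to see in code: Shannon’s H has the same −Σ p log p shape as the statistical-mechanical entropy, differing only in the constant out front and the base of the logarithm. A minimal sketch (the coin examples are mine):

```python
# Shannon entropy of a probability distribution, in bits:
# H = -sum(p * log2(p)). Same functional form as Gibbs entropy,
# -k * sum(p * ln(p)); only the constant and interpretation differ.
import math

def shannon_entropy(probs):
    """Entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # fair coin: exactly 1.0 bit
print(shannon_entropy([0.9, 0.1]))  # biased coin: less uncertainty
```

A fair coin carries one full bit of uncertainty per flip; a biased coin carries less, which is the sense in which the biased coin is "more predictable."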

That gets us back to statistical mechanics and the pressure integral.

Take a look at the free energy equation.

G = H – TS

This differentiates to

dG = dH – TdS – SdT

But in biology, the systems are approximately isothermal (dT=0) so that last term is dropped to produce the more common

dG = dH – TdS

No temperature integral.

These relationships aren’t produced arbitrarily. See how Legendre transforms are applied. You’ll see that the Gibbs free energy equation is already modified for biological conditions.
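As a worked example of the isothermal form (the numeric values below are assumptions for illustration, not data from any particular reaction):

```python
# Sanity check of the isothermal simplification above: with dT = 0 the
# S*dT term vanishes and dG = dH - T*dS. Values are illustrative.
T = 310.0          # K, roughly physiological, held constant (dT = 0)
dH = -20_000.0     # J/mol, enthalpy change (assumed)
dS = -35.0         # J/(mol*K), entropy change (assumed)

dG = dH - T * dS   # the S*dT term drops out because dT = 0
print(f"dG = {dG:.1f} J/mol")  # negative, so spontaneous here
```

Note how the entropy term can partially offset a favorable enthalpy change: here TΔS claws back 10,850 J/mol, but ΔG stays negative, so the (hypothetical) reaction still proceeds.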

Of course, I am far too stupid to understand anything about biological systems.

"Precise: exactly or sharply defined or stated" – this is inherently a matter of context.

Precision: the degree of refinement with which an operation is performed or a measurement stated — compare ACCURACY… This is a property of the spoken word as well, and if you are implying that it is not possible to misinterpret equations or mathematical representation, then it is not possible for there to be any degree of accuracy; there is simply static precision, which, if that is true, is using the condition to explain the symptoms, or at least is rather tautological.

2b: the accuracy (as in binary or decimal places) with which a number can be represented, usually expressed in terms of the number of computer words available for representation

No experience would count as grounds for revising, for example, that 5 + 7 = 12. Were we to add up 5 things and 7 things and get 13 things, we would recount. Should we still, after repeated recounting, get 13 things we would assume that one of the 12 things had split or that we were seeing double or dreaming or even going mad. The truth that 5 + 7 = 12 is used to evaluate counting experience, not the other way round.

The a priori nature of mathematics is a complicated, confusing sort of a thing. It’s what makes mathematics so conclusive, so incorrigible: Once proved, a theorem is immune from empirical revision. There is, in general, a sort of invulnerability that’s conferred on mathematics, precisely because it’s a priori.

…

Human spoken language is highly adaptive. The rules of grammar change over time. A well-formed phrase in the 11th century may no longer be well-formed in the 21st. I chose the term human spoken language as distinct in kind from mathematics.

"And, mathematics are predicated on context. To say otherwise is to deny their very "precise" importance.

Aside from the axioms and operators of decimal arithmetic, please identify the context necessary for

1 + 2 = 3

to be true."

Please show 1 + 2 = 3 to any aboriginal African and ask them if it is true. They wouldn’t know what the fuck you were talking about/showing them. It would simply be gibberish. Perhaps, as you suggest, mathematics is not relative to context, but the understanding of it inherently is. And parallel to those lines, heh, I CAN BE JUST AS ABSOLUTE WITH LANGUAGE.

Car always means "car", even if it means something else. Whether car means "pig" is based on the context. I can call a pig "car". But regardless, in every way possible, Car does mean car, beyond what else it means.

And also, I wouldn’t use 1 + 2 = 3 as some absolute and inclusive representative of mathematics.

This is neither how language nor mathematics works. You’re assigning a problem with human spoken language to mathematics.

Either that or you’re defining "language" as some sort of overarching postmodern reality framework.

Either way your reasoning is flawed.

You’re saying that someone can have 2 children and have 1 more and not end up having had 3 children?

We don’t need to go to Africa to demonstrate innumeracy.

Car always means "car", even if it means something else.

First, you’re referring to definitions not grammar. Human spoken language is much more than definitions. For example, cadence and word choice among words that have the same definition influence the meaning of the message differently. Second, even the definitions of words change over time. Finally, some words are retired while others are introduced–and they need not have any relationship to what came before.

And also, I wouldn’t use 1 + 2 = 3 as some absolute and inclusive representative of mathematics.

Read the latest Scientific American. There is an article on how language and mathematics are processed in different parts of the brain.

I have heard that one explanation for male preeminence in mathematics is the female inclination to be able to communicate. Thus, it is hypothesized, women spend more time and energy trying to explain, in language, what they are doing rather than just manipulating mathematical symbols. This added burden makes them less likely to be able to advance the state of the art, but probably makes them better at teaching it.

Supplant "it" with "though" and you take my meaning precisely. Meaning is immutable in the same sense that math is. And of course I am saying that having two kids and having one more makes three kids, but what if the first two died? How many kids do we have then? 1, and you can assess that via the means of subtraction: 3 total minus 2 dead leaves us with one. But the outcome of that result is relative to the circumstance in just the same way that describing it with "words" is.

Regardless of how you tried to twist my meaning into those easily countered points, the only point I made that need be understood precedes this sentence, and it is a point that you did not see at all. "Innumeracy" was not my argument; that is stupid, first of all because I don’t know what it means :O, and second of all because you asked how math was not relative to context, and I gave you a perfect example of how it was. BUT, even better than me, you did. If we are adding up kids, then the context of that description (with mathematical symbols, shapes, I don’t care) is what makes it necessarily true, namely, that there are three kids. The fact that 1 plus 2 makes three is not isomorphic with the fact that there are three kids, and the principles at work in defending that 1 and 2 make three are not the same principles that can assuage us of the lack of relevance of context.

Therefore, your previous statement of "precision" is pointedly biased, absurd in the context that I just spoke of, and yet perfectly right in the means by which you meant it, and by what you actually used it to mean. Essentially, I was taking your argument somewhere you did not, and to relate that back to me, you essentially did the same thing to what I said. If I apologize first, you promise to do it also?

Also, Sidis’ conclusion about the 2nd law, namely that it is merely the product of the way in which humans view the dynamics of energy, is not discounted by the links you provided, and it shares similarities with Wolfram’s take on the 2nd law and also the interpretation of the quantumists at Quantonics.com, where they have 4-6 thought-provoking definitions of "entropic stages," I suppose you could say.

Also: let’s say a family just had two kids, and then had a third. BY YOUR MEANING, the family has three kids. But by mine, the family has 3 kids, but also 5 kids, because there were twins born before the "2 plus the 1," and then, 2 years later, another was born, giving them six. So, yes, they have 3 kids, but they also have 6, five and one. I am not twisting "definitions" here; this is immutable just as a number is, and since a number is simply a representation of an idea, I cannot see how you can claim precision, when by its very nature it is of an almost infinite variability and plasticity in relation to the forms of its representations, or rather, what it is representing. Though 2 always represents two, it does not always represent two "something".

One question: Do information theory and energy dynamics (the 1st-3rd laws) explain themselves without relation to humans, and, if so, by what special relations are we then connecting ourselves to them that were excluded in the account of their actuality outside of our understanding? Or, better yet, outside the means of our understanding?

Though 2 always represents two, it does not always represent two "something", as sometimes its actual properties are being realized by the mere fact that it CAN represent two somethings, or that it always does. This is a deconstruction of the MEANING, not of the definition. The definition is the means by which we get the meaning, obviously.

Actually that is not the SCIAM article I was thinking of. The one to which I refer is in the latest edition. It discusses a study of men with brain lesions who have lost their ability to discern the difference between "Man bites dog" and "Dog bites man" yet retain the ability to distinguish "49-4" from "4-49".

"In fact, most of us were inclined to regard the symmetry of elementary particles with respect to right and left as a necessary consequence of the general principle of right-left symmetry of Nature. Thanks to Lee and Yang and the experimental discoveries inspired by them we now know that this was a mistake."

As for the economist, Summers, he suffered the fate of anyone who thinks with their mouth. Modern civilization has a very embarrassing track record dealing with differences among peoples, whether imaginary or real.

Anyone who has spent any time in the People’s Republic of Cambridge should have known better.

"Many European languages (including English) use the same word for "right" (in a directional sense) to mean "correct, proper". Throughout history, being left-handed was considered negative; the Latin and Italian word sinistra (from which the English ‘sinister’ was derived) means "left". There are many negative connotations associated with the word "left-handed": clumsy, awkward, unlucky, insincere, sinister, malicious, and so on. French gauche, meaning "left", means "awkward or clumsy" in English, whereas French droit is cognate with English "adroit", meaning dexterous, skillful with the hands. As these are all very old words, they would tend to support theories indicating that the predominance of right-handedness is an extremely old phenomenon."

According to Sperry et al., the functional areas in the hemispheres are selectively suppressed.

"The reasoning here says that left lesions in the presence of the commissures act to prevent the expression of latent function, actually present but suppressed, within the undamaged right hemisphere."

This is from the early 80s before widespread availability of fMRI and PET. Do we have a different picture today?

"Only after the intact right hemisphere is released from its integration with the disruptive and suppressive influence of the damaged hemisphere, as effected by commissurotomy, can its own residual function become effective."

Have you worked with commissurotomy patients? From what little I’ve read, they’re functional but different. But in what way?

And yes, Wernicke, Jackson, Sherrington, et al., all contributed to the notion that X marks the spot.

Back in the day, when I was part of a surgical epilepsy team, we would study patients in the OR (operating room) while their brain was exposed and they were awake. This was necessary because if the epileptogenic focus was near eloquent brain (http://brain.oupjournals.org/cgi/content/full/124/9/1683), the surgical removal would be tailored to try to limit any interruption of language. (Actually, lateralizing language pre-operatively was also done using intracarotid injection of amobarbital, i.e. the so-called Wada test: http://www.neuro.mcg.edu/np/wada.html.)

But as we have come to learn, there is much more to the organization of behavior than was previously thought.

Consciousness especially, and to a lesser extent memory and language, are now believed to be subtended over large networks.

And as far as symmetry goes, the example of language (and, associated with it, handedness) is the most interesting in that something as complex as language (imagine the selection required for it to emerge and be sustained) can reside in either the left or right hemisphere. There is no other example in human anatomy that allows for this type of difference in location at the same frequency. (There are rare cases of dextrocardia, but their incidence in no way approaches that of right-hemisphere-dominant language.)

The fascination with these individuals comes from the observation that each of the hemispheres can act in often dramatic ways, as if one hemisphere is truly independent of the other. Two brains, two individual consciousnesses.

I remember first learning about Lee and Yang’s experiments in The Ambidextrous Universe by Martin Gardner. The results nearly floored me. Who would have thunk it? I recommend the book since my remaining brain cells still recall its impact after close to 25 years.

Matt,

I’m glad you liked Axelrod. Excellent the book is, but I’m not so sure of its topicality (is that a word?).

Summers is a coward. If he was going to float a politically incorrect hypothesis for the sake of provocation, then great, but since then he’s done a marathon’s worth of backpedaling. He should have come prepared to play it to the hilt or not at all; they can smell fear and will hold it over him forever now. (Anyone interested in the topic: Pinker and Spelke held a debate here. Best "checkmate" moment comes from Pinker near the end, where he makes the point that prejudice doesn’t seem to be holding women down in other academic subjects.)

Bill: "Topicality" works as a word and that’s all that matters. And I think game theory bears pretty directly on all kinds of Eustaces.

I’ve only read one (the third) of the nine books on Aaron’s recommended reading list. Two if you count Aaron’s father’s suggestion. Three if you count someone else’s Wolfram suggestion. Four if you count the name of the site.