About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Friday, October 26, 2012

From the naturalism workshop, part I

by Massimo Pigliucci

Well, here we are in Stockbridge, MA, in the middle of the Berkshires, sitting at a table that features a good number of very sharp minds, and yours truly. This gathering is the brainchild of cosmologist Sean Carroll, entitled “Moving Naturalism Forward,” its point being to see what a bunch of biologists, physicists, philosophers and assorted others think about life, the universe and everything. And we have three days to do it. Participants included: Sean Carroll, Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Dan Dennett, Owen Flanagan, Rebecca Goldstein, Janna Levin, David Poeppel, Alex Rosenberg, Don Ross, Steven Weinberg, and myself.

Note to the gentle reader: although Sean has put together an agenda of broad topics to be discussed, this post and the ones following it will inevitably have the feel of a stream of consciousness. But one that will be interesting nonetheless, I hope!

During the roundtable introductions, Dawkins (as well as the rest of us) was asked what he would be willing to change his mind about; he said he couldn’t conceive of a sensible alternative to naturalism. Rosenberg, interestingly, brought up the (hypothetical) example of finding God’s signature in a DNA molecule (just as Craig Venter has actually done with his own signature). Dawkins admitted that that would do it, though he immediately raised the more likely possibility that it would be a practical joke played by a superhuman — but not supernatural — intelligence. Coyne then commented that there is no sensible distinction between superhuman and supernatural, in a nod to Clarke’s third law.

There appeared to be some interesting differences within the group. For instance, Rosenberg clearly has no problem with a straightforward functionalist computational theory of the mind; DeDeo accepts it, but feels uncomfortable about it; and Deacon outright rejects it, without embracing any kind of mystical woo. Steven Weinberg asked the question of whether — if a strong version of artificial intelligence is possible — it follows that we should be nice to computers.

The first actual session was about the nature of reality, with an introduction by Alex Rosenberg. His position is self-professedly scientistic, reductionist and nihilist, as presented in his The Atheist’s Guide to Reality. (Rationally Speaking published a critical review of that book, penned by Michael Ruse.) Alex thinks that complex phenomena — including of course consciousness, free will, etc. — are not just compatible with, but determined by and reducible to, the fundamental level of physics. (Except, of course, that there appears not to be any such thing as the fundamental level, at least not in terms of micro-things and micro-bangings.)

The first response came from Don Ross (co-author with James Ladyman of Every Thing Must Go), who correctly pointed out that Rosenberg’s position is essentially a statement of metaphysical faith, given that fundamental physics cannot, in fact, derive the phenomena and explanations of interest to the special sciences (defined here as everything that is not fundamental physics).

Weinberg made the interesting point that when we ask whether X is “real” (where X may be protons or free will) the answer may be yes, with the qualification of what one means by the term “real.” Protons, in other words (and contra both Rosenberg and Coyne), are as real as free will for Weinberg, but that qualifier means different things when applied to protons than it does when applied to free will.

In response to Weinberg’s example that, say, the American Constitution “exists” not just as a piece of paper made of particles, Rosenberg did admit that the major problem for his philosophical views is the ontological status of abstract concepts, especially mathematical ones as they relate to the physical description of the world (like Schrödinger’s equation, for instance).

Dennett asked Rosenberg if he is concerned about the political consequences of his push for reductionism and nihilism. Rosenberg, to his credit, agreed that he has been very worried about this. But of course from a philosophical and epistemological standpoint nothing hinges on the political consequences of a given view, if such a view is indeed correct.

Following somewhat of a side track, Dennett, Dawkins and Coyne had a discussion about the use of the word “design” when applied to both biological adaptations and human-made objects. Contra Dawkins and Coyne, Dennett defends the use of the term design in biology, because biologists ask the question “what is this [structure, behavior] for?” thus honestly reintroducing talk of function and purpose in science. A broader point made by Dennett, which I’m sure will become relevant to further discussions, is that the appearance on earth of beings capable of reflecting on things makes for a huge break from everything else in the biosphere, a break that ought to be taken seriously when we talk about purpose and related concepts.

Owen Flanagan, talking to Rosenberg, saw no reason to “go eliminativist” on the basic furniture of the universe, which includes a lot more than just fermions qua fermions (and bosons): it also includes consciousness, thoughts, libraries, and so on. And he also pointed out that, again, Rosenberg’s ontology potentially gets into serious trouble if we decide that things like mathematical objects are real in an interesting sense of the term (because they are not made of fermions). Flanagan pointed out that what we were doing in that room had to do with the meaning of the words being exchanged, not just with the movement of air molecules and the propagation of sounds, and that it is next to impossible to talk about meaning without teleology (not, he was immediately careful to add, in the Cartesian sense of the term).

Again interestingly, even surprisingly, Rosenberg agreed that meaning poses a huge problem for a scientistic account of the world, for a variety of reasons brought up by a number of philosophers, including Dennett and John Searle (the latter arguing along very different lines from the former, of course). He was worried that this would give comfort to anti-naturalists, but I pointed out that not being able to give a scientific (as distinct from a scientistic) account of something — now or ever (after all, there are presumably epistemic limits to human reason and knowledge) — does not give much logical comfort to the super-naturalist, who would simply be arguing from ignorance.

Poeppel asked Rosenberg what he thinks explanations are, I assumed in the context of the obvious fact that fundamental physics does not actually explain the subject matters of the special sciences. Rosenberg’s answer was that explanations are a way to allay “epistemic hitches” that human beings have. At which point Dennett accused Rosenberg of being an essentialist philosopher (a la Parmenides), making a distinction between explanations in the everyday sense of the word and real explanations, such as those provided by science. But, argued Dennett, this is a very old-fashioned way of doing philosophy, and it treats science in a more fundamentalist (not Dennett’s term) way than (most) scientists themselves do.

The afternoon session was devoted to evolution, complexity and emergence, with Terrence Deacon giving the introductory remarks. He began by raising the question of how we figure out what does and does not fit within naturalism. His naturalistic ontology is clearly broader than Rosenberg’s, including, for instance, teleology (in the same sense as espoused earlier in the day by Dennett). Deacon rejects what Dennett calls “greedy” reductionism, because there are complex systems, relations, and other things that don’t sit well with extreme forms of reductionism. Relatedly, he suggested (and I agreed) that we need to get rid of talk of both “top-down” and indeed “bottom-up” causality, because it constrains us to think about the world in ways that are not useful. (Of course, top-down causality is precisely the thing rejected by greedy reductionists, while the idea that causality only goes bottom-up is the thing rejected by anti-reductionists.)

Ross concurred, and proposed that another good thing to do would be to stop talking about “levels” of organization of reality and instead think about the scale of things (the concept of “scale” can be made non-arbitrary by referring to measurable degrees of complexity and/or to scales of energy). Not surprisingly, Weinberg insisted on the word levels, because he wants to say that every higher level does reduce to the lowest one.

Deacon is interested in emergence because of the issue of the origin of life understood (metaphorically speaking) as a “phase transition” of sorts, which is in turn related to the question of how (biological) information “deals with” the constraints imposed by the second law of thermodynamics. In other words: the interesting question here is how did a certain class of information-rich complex systems manage to locally avoid the second law-mandated constant increase in entropy. (Note: Deacon was most definitely not endorsing a form of vitalism according to which life defies — globally — the second principle of thermodynamics. So this discussion is relevant because it sets out a different way of thinking about what it means for complex systems to be compatible with but not entirely determined by the fundamental laws of physics.)

All of the above, said Deacon, is tied up in what we mean by information, and he suggested that the well known Shannon formulation of information — as interesting as it is — is not sufficient to deal with the teleologically-oriented type of information that characterizes living organisms in general, and of course consciously purposeful human beings in particular.
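To make Deacon’s contrast concrete: Shannon information is a purely statistical measure of surprise over a set of symbols, and says nothing about what a message means or what it is for. A minimal sketch (the function name and example distributions are mine, for illustration only):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol.

    This quantifies only the statistical improbability of a source's
    output; it is silent on meaning, reference, or purpose, which is
    the gap Deacon is pointing at for biological information.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally surprising: one bit per toss.
print(shannon_entropy([0.5, 0.5]))    # 1.0
# A heavily biased coin carries far less information...
print(shannon_entropy([0.99, 0.01]))
# ...yet neither number tells us what the tosses are *about*.
```

Two sources with identical statistics get identical entropies even if one encodes a genome and the other is noise, which is why Shannon’s measure alone cannot capture teleologically oriented information.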

Dennett seemed to have quite a bit of sympathy with Deacon’s ideas, though he focused on pre- or proto-Darwinian processes as a way to generate those information-rich, cumulative, (locally) second law-defying systems that we refer to as biological.

Rosenberg, as usual, didn’t seem to “be bothered by” the fact that we don’t have a good reductionist account of the origin of life. Methinks Rosenberg should be bothered a bit more by things for which reductionism doesn’t have an account and where emergentism seems to be doing better.

At this point I asked Weinberg (who has actually read my blog series on emergence on his way to the workshop!) why he thinks that the behavior of complex systems is “entailed” by the fundamental laws. He conceded two important points, the second one of which is crucial: first, he readily agreed that of course nobody can (and likely will ever be able to) actually reduce, say, biology to physics (or even condensed matter physics to sub-nuclear physics); so, epistemic reduction isn’t the game at all. Second, he said that nobody really knows if ultimate (i.e., ontological) reduction is possible in principle, which was precisely my point; his only argument in favor of greedy reductionism seems to be a (weak) historical induction: physicists have so far been successful in reducing, so there is no reason to think they won’t be able to keep doing it. Even without invoking Hume’s problem of induction, there is actually very good historical evidence that physicists have been able to do so only within very restricted domains of application. It was gratifying that someone as smart and knowledgeable in physics as Weinberg couldn’t back up his reductionism with anything more than this. However, Levin agreed with Weinberg, insisting on the a priori logical necessity of reduction, given the successes of fundamental physics.

Weinberg also agreed that there are features of, say, phase transitions that are independent of the microphysical constituents of a given system; as well as that accounts of phase transitions in terms of lower level principles are only approximate. But he really thinks that the whole research program of fundamental physics would go down the drain if we accepted a robust sense of emergence. Well, maybe it would (though I don’t think so), but do we have any better reason to accept greedy reductionism than fundamental physicists’ amor proprio? (Or, as Coyne commented, the fact that if we start talking about emergence then the religionists are going to seize on it for ideological purposes? My response to Jerry was: who cares?)

Don Ross argued that fundamental physics just is the discipline that studies patterns and constraints on what happens that apply everywhere at all times. The special sciences, on the contrary, study patterns and constraints that are more spatially or temporally limited. This can be done without any talk of bottom-up causality, which seems to make the extreme reductionist program simply unnecessary.

Flanagan brought up the existence of principles in the special sciences, like natural selection in biology, or operant conditioning in psychology. He then asked whether the people present imagine that it will ever be possible to derive those principles from fundamental physics. Carroll replied — acknowledging Weinberg’s earlier admission — that no, that will likely not be possible in practice, but in principle... But, again, that seems to me to amount to a metaphysical promissory note that will never be cashed.

Dennett: so, suppose we discover intelligent extraterrestrial life that is based on a very different chemistry from ours. Do we then expect them to have the same or entirely different economics? If lower levels entail (logically) higher phenomena, the answer should be in the negative. And yet, one can easily imagine that similar high-level constraints would act on the alien economy, thereby yielding a convergently similar economy “emerging” from a very different biochemical substrate. The same example, I pointed out, applies to the principle of natural selection. Goldstein and DeDeo engaged in an interesting side discussion on what exactly logical entailment, well, entails, as far as this debate is concerned.

Interesting point by Deacon: emergence is inherently diachronic, i.e., emergent properties are behaviors that did not appear up to a certain time in the history of the universe. This goes nicely with his contention that talk of causality (top-down or bottom-up) is unhelpful. In answer to a question from Rosenberg, Deacon also pointed out that this historical emergence may not have been determined by things that happened before, if the universe is not deterministic but contingent (as there are good reasons to believe).

Simon DeDeo took the floor talking about renormalization theory, which we have already encountered as a major way of thinking about the emergence of phase transitions. Renormalization is a general technique that can be used to move from any group/level to any other, not just in going from fundamental to solid state physics. This means that it could potentially be applied to connecting, say, biology with psychology, if all the processes involved proceed in finite steps. However, and interestingly, when systems are characterized by effectively infinite steps, mathematicians have shown that this type of renormalization group analysis is subject to fundamental undecidability (because of the appearance of mathematical singularities). Seems to me that this is precisely the sort of thing we need to operationalize otherwise vague concepts like emergence.

Another implication of what DeDeo was saying is that one could, in practice, reduce thermodynamics (macro-model) to statistical mechanics (micro-model), say. But there is no way to establish (it’s “undecidable”) whether there isn’t another micro-model that is equally compatible with the macro-model, which means that there would be no principled way to establish which micro-model affords the correct reduction. This implies that even synchronic (as opposed to diachronic) reduction is problematic, and that Rosenberg’s refrain, “the physical facts fix all the facts” is not correct. (As a side note, Dennett, Rosenberg and I agreed that DeDeo’s presentation is a way of formalizing the Duhem-Quine thesis in epistemology.)

It occurred to me at this point in the discussion that when reductionists like Weinberg say that higher level phenomena are reducible to lower level laws “plus boundary conditions” (e.g., you derive thermodynamics from statistical mechanics plus additional information about, say, the relationship between temperatures and pressures), they are really sneaking in emergence without acknowledging it. The so-called boundary conditions capture something about the process of emergence, so that it shouldn’t be surprising that the higher level phenomena are describable by a lower level “plus” scenario. After all, nobody here is thinking of emergence as a mystical spooky property.

And then the discussion veered into evolution, and particularly the relationship between the second law of thermodynamics and adaptation by natural selection. Rosenberg’s claim was that the former requires the latter, but both Dennett and I pointed out that that’s a misleading way of putting it: the second law is required for certain complex systems to evolve (in our type of universe, given its laws of physics). But the mere existence of the second law doesn’t necessitate adaptation. Lots of other boundary conditions (again!) are necessary for that to be the case. And it is this tension, between fundamental physics requiring (in the strong sense of logical entailment) certain complex phenomena and merely being necessary (but not sufficient) for and compatible with them, that captures the major division between the two camps into which the workshop participants are divided (understanding, of course, that there is some porosity between the camps).

41 comments:

Thank you for your notes. I avoid philosophy because of the corners it backs itself into but I don't mind reading a summary. Fortunately, one doesn't have to have all the answers to Life, the Universe, and Everything in order to be an atheist even if theists demand that we do.

Isn't the flip side of reduction, emergence? If you're reducing, you're moving from some level or domain B and realizing it in the operation of some other level or domain A. Level B emerges on the basis of the operation of level A. Level B supervenes on level A. There are laws, entities or principles specific to each level. What is specific to level B emerges from A. Rosenberg and others claim that evolution involves some kind of level B supervenience where level B has no laws, entities or principles that are distinct from those of level A. That kind of supervenience doesn't seem to exist in reality. We all should recognize that by their very nature reduction and emergence are opposite sides of the same coin. You cannot have one without the other. -Tanner Phillips @philosophyofsci

Second, that is precisely what is dividing people here into two (maybe three, with some people being at least partially agnostic) camps: everyone agrees that you either have fundamental reductionism or you have emergence, not both.

OneDay,

> did anyone hug without asking permission? <

Nope, at least not that I saw... As for personalities, one of the nice things at these gatherings is that you get to have dinner or drinks with people you fundamentally (sometimes vehemently) disagree with, and that helps in the human relations department...

Echo the coolness - cannot wait for the video. Too many names, so godspeed to those of you who cheerfully tear down those notions of reality in person. That mystical woo stuff will seem much less mystical when the things you feel or see are shown to be similarly incomprehensible.

- Not having brain scientists means, effectively, everyone was off topic.
- Behavior is the critical dependent variable, and mainly this was verbal behavior.
- Accepting usage of any "-isms" already throws the conversations into one of ideologies. In fact, there is no such thing as reductionism; there is just the data of specific studies.

For example, if we accept the research/data on no free will and that behavior is driven in 150 ms, then the discussions are fruitless, and epiphenomenal.

The science says everyone is just verbally signalling in a social setting to maintain homeostasis in their brain processes. That's all.

Thus, the only statements that have an empirical basis are the psychiatric and brain-based ones, and only to some degree.

Why would philosophers ever be invited and not brain researchers!? All philosophers add is tactics in using words/verbal behavior: marketing.

fMRI has completely revolutionized the understanding of the "mind." I find research on mental states in action much more fascinating than splitting hairs over 'isms,' but theologians are philosophers, and they're the ones who get to represent the believers' side in debates, so we have to acknowledge philosophy as part of the arsenal being used against us.

Never let your opponent define you. By accepting philosophical-theological-ideological framing of what should be only discussions of data and research, the evidence-based position loses and false ideas take up all the time.

Brain research is the farthest thing from merely "fascinating"; it is the most fundamental knowledge of all, since brain processes define animal behavior > human behavior > verbal behavior > all knowledge.

Letting philosophy frame the discussion is as false as letting any other magical thinking and ideology do so. Ultimately, philosophy is just magical thinking: "words matter." They don't. Neither does consciousness.

"Deacon also pointed out that this historical emergence may not have been determined by things that happened before, if the universe is not deterministic but contingent (as there are good reasons to believe)."

Hope you'll clarify this, or point me to an earlier discussion. Emergent phenomena, not being spooky, have causal antecedents and a current instantiation base that conforms to physical laws. Given the conditions in play, is it not determined that an irreducible phenomenon emerges from those conditions?

I'm surprised that there was seemingly no discussion of algorithmic compressibility with respect to reductionism/emergence. If one accepts that the universe is essentially computational and that structures have emerged from fundamental physics over time, then bottom-up dependency of these structures is a truism. The real question is whether there exist more algorithmically compressible predictive explanations for various higher level phenomena than just having to essentially re-run the entire simulation. And more pointedly, given a trade-off between compressibility and predictive fidelity, the question is how good we can get along that trade-off frontier for specific phenomena.

Learninglayer, that's a big "If." It's an "if" that I've rejected since I first heard Dennett propose the similar idea nearly 20 years ago that evolution was algorithmic. There's simply no evidence for it.

Regardless of whether you strictly buy the "if," I'm simply suggesting that that which we cannot explain with a much more compressible description than the phenomenon itself is what we tend today to call "emergent." We used to give it labels such as supernatural, etc.

We do have some useful algorithmic descriptions of evolution (Price equation, etc.). What we don't have are good, compressible descriptions of how evolution emerges from more fundamental properties of the universe. Until/unless we do, it will remain in the realm of the emergent.

See my further thoughts in Workshop Part II comments. I am simply not keen on concepts that rely on a series of observer-dependent qualities, and so am looking for ways to put the concept of emergence on firmer footing such that informed observers can agree as to whether a particular phenomenon should be labeled "emergent" or not, given a common state of information. Or even better, to be able to agree to what *degree* it is emergent. Batterman's approach is already a step in this direction--algorithmic complexity-based approaches seem to me to offer the opportunity to generalize and extend.

If you start with natural language and Western philosophical concepts, words and ideas, you just end up with a bunch of opinions and personal feelings, insofar as words accurately convey internal states. You start in an evidence hole and dig the hole deeper.

> if the universe is not deterministic but contingent (as there are good reasons to believe). <

I don't understand why contingency and determination are considered mutually exclusive. For example, the evolution of hominids is clearly contingent on the prior evolution of mammals, but that does not stand in contradiction with the claim (whether it is true or not) that the evolution of hominids was predetermined billions of years ago, with the evolution of mammals one of the intermediate steps in that deterministic chain of events.

Contingency means that things could have gone otherwise, which is precluded in a deterministic universe. As far as I know, the only way to get true contingency from the laws of physics is through quantum mechanical indeterminacy.

Thanks, and as far as I can see this is simply a language issue, me not being a native speaker and all. The only meaning of contingent that I was aware of is "dependent on", but apparently it can also mean "uncertain", which I did not know until today but which appears to be the one that is used in philosophy. I find it a bit unfortunate, really, that the same word can have meanings that are so very different, but well, that is for Anglo-Saxons to decide...

As is true with any experiment, the conclusions reached are often colored by a preconception of what the experiment was to confirm. This is subjectivism, which is inherent in any study, since the 'data' is only one component, and subject to interpretation.

From the Libet paper,

"Next, the team repeated Libet's experiment, but this time if, while waiting to act spontaneously, the volunteers heard a click they had to act immediately. The researchers predicted that the fastest response to the click would be seen in those in whom the accumulation of neural noise had neared the threshold – something that would show up in their EEG as a readiness potential.

This is exactly what the team found. In those with slower responses to the click, the readiness potential was absent in the EEG recordings."

Consider the possibility that with fast responses to a stimulus, there may be a choice of action: either a programmed response [the handling of a fast ground ball by a shortstop, or a quick driver's response to someone cutting in front of his moving vehicle], or one of deliberation, however brief. The first occurs in milliseconds [150-250], the second somewhat longer.

A 'readiness potential [RP]' is always there, IMO, based on accumulated experience, but can be overridden by a deliberative response, or by a combination of the two. My assessment is that the brain does in fact make decisions, but in most cases only colors perceptions based on prior data [experience], and may act unilaterally in cases of necessitated urgency.

An example: If a child dashes into one's vehicle path, the braking function is virtually automatic. But as the pedal is slammed, 'conscious intervention [CI]' steps in to possibly (1) terminate the action, or (2) to [in this case] adjust the pedal pressure, due to a quick assessment of speed, road conditions, and distance to subject, for optimal stopping distance.

In the case of the shortstop, the conditioned response may be routine [programmed], but be mediated by the ball taking an unusual hop, in which case the fielder may take a sliding dive: a non-standard, conscious, and in effect mediated response in place of his usual one.

In short, although the study has implications regarding programmed, or RP-type, responses, conscious thought by other means may be operative as well. Both further studies and truly objective, unbiased assessments of the data are plainly on the table for consideration.

One final note. I have no a priori notions or desires that free will or a form of duality be true; just rational thought regarding the various data.

Another interpretation of the same data is that the brain is representing the state of the anatomy in awareness by cycles of impulses that build to representations, building to decisions and then further to actions.

Obviously, decisions and actions do not arise instantaneously; they arise gradually from subtle preparation that can be short-circuited by a subject just by watching an imperceptibly subtle tendency (even a habit) build on a scan and changing their mind.

It's just a recognition of building to perceptual awareness, which has no consequences for free will, merely for self-awareness due to building and subsiding impulses.

I'd like to propose an explanation of Weinberg's use of the word "entails". The behaviour of a system at a lower level entails a higher level phenomenon if the higher level phenomenon could in principle be seen in a simulation of the lower level entities.

For example, it's not unusual to refer to the behaviour of a flock of birds as emergent. But I believe it's the case that the behaviour of a flock of birds can be simulated (roughly) by simulating the behaviour of individual birds and how they react to what they see other birds doing. In that sense, the behaviour of the flock is determined by the behaviour of the individual birds, i.e. the latter "entails" the former.
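That kind of bird-level simulation can be sketched in a few lines. The rules and weights below loosely follow Reynolds' classic "boids" model and are illustrative choices of mine, not a validated implementation: each bird reacts only to nearby birds, yet coherent motion of the whole flock appears in the aggregate.

```python
import random

def step(boids, radius=0.3, dt=0.1):
    """One update of Reynolds-style flocking rules: each bird steers
    toward its neighbors' center (cohesion), matches their average
    velocity (alignment), and avoids crowding (separation). Note that
    no rule refers to the flock as a whole."""
    new = []
    for x, y, vx, vy in boids:
        nbrs = [b for b in boids
                if b[:2] != (x, y)
                and (b[0] - x) ** 2 + (b[1] - y) ** 2 < radius ** 2]
        if nbrs:
            cx = sum(b[0] for b in nbrs) / len(nbrs)   # neighbors' center
            cy = sum(b[1] for b in nbrs) / len(nbrs)
            avx = sum(b[2] for b in nbrs) / len(nbrs)  # average velocity
            avy = sum(b[3] for b in nbrs) / len(nbrs)
            vx += 0.05 * (cx - x) + 0.1 * (avx - vx)   # cohesion + alignment
            vy += 0.05 * (cy - y) + 0.1 * (avy - vy)
            for bx, by, _, _ in nbrs:                  # separation
                if (bx - x) ** 2 + (by - y) ** 2 < 0.01:
                    vx -= 0.05 * (bx - x)
                    vy -= 0.05 * (by - y)
        new.append((x + vx * dt, y + vy * dt, vx, vy))
    return new

# Start with 20 birds at random positions, at rest, and iterate.
random.seed(0)
flock = [(random.random(), random.random(), 0.0, 0.0) for _ in range(20)]
for _ in range(200):
    flock = step(flock)
```

In this sense the flock's behaviour is "entailed" by the bird-level rules: nothing beyond individual reactions is put in, yet flock-level order comes out.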

A similar point could be made about the behaviour of economies, which in principle could be simulated at the level of individual people.

Massimo, would you call these cases of emergence? If not, would I be right in thinking that you are referring to phenomena that couldn't be seen in a simulation of lower level entities?

Personally, I don't know enough about physics to assess examples from physics, such as phase transitions. And I'm willing to entertain the possibility that things are different at the lowest levels of description. (I'm sympathetic to the view that there may be no ultimate level of description of nature, and no ultimate entities.) But at higher levels I see no useful role for the concept of emergence.

The problem is there is no definition of emergence; it's just an opportunity to talk about your favorite area of study and call it emergent in any company. Good for academics to muse over, but a red herring.

Everything evolves by aggregation of hydrogen & helium (or their constituents) after the big bang, right through to aggregation as a human being. Tracking the potential of H & He to do that is the aim of science generally, and it happens by changing properties as the constituent particles & fields arrange in different ways. That's evolution, understandable in retrospect.

Emergence is nothing. It's just saying that as constituents aggregate they become complicated beyond an understanding of their constituents separately. That's evolution, understandable in retrospect, nothing more. Any system is a 'whole' that can be tracked historically to see how it was created from constituents.

We need more endeavor in tracking and explaining and less in useless red herrings that try to flip concepts to no purpose. The plot gets thick quickly after the big bang: who could say that the properties of hydrogen could change crucially into deuterium just by aggregating a neutron with a proton? And yet, they have that... potential.

The problem is that you haven't read what I wrote over the past few days. I gave you a well-articulated definition of emergence (from Humphreys) and a solid mathematical description of emergent phenomena (from Batterman). What else do you want?

> That's evolution, understandable in retrospect, nothing more <

I'm an evolutionary biologist, and I can tell you that that's just wishful thinking, nothing more.

Kevin,

> There probably is nothing like "rational thought." <

Seriously? So math and logic, just to pick two examples, aren't examples of rational thinking? Wow.

If it's not understandable in retrospect, it's not going to be predictable either, so you are outside science there. Emergence is just ignorance of retrospect as the tool of explanation, in preference for amazement that things change when they aggregate.

The fact appears to be that what we call rational thought is just locally normed talk (chit-chat), is entirely epiphenomenal, and may be deceptive about the actual brain processes controlling behavior.

A rule of nature is that "information is expensive," and so-called "thought," which is really just social verbal behavior/self-reports, is very cheap and easy to collect. Thus, it has little information value.

Consciousness and the weighing of options etc. are not needed to, say, do math. That appears to happen largely unconsciously, in milliseconds.

But the sales pitch of "rational thought" is a very clever and effective sales promise/tactic, and like most sales promises, likely a simple lie.

I responded to the posts purporting to define as 'emergence' what is merely the complicated aggregation of constituents. There is no definition of emergence. It assumes some kind of intactness obeying some kind of teleology when in fact it's all driven from below, as retrospect shows.

> If it's not understandable in retrospect, its not going to be predictable either, so you are outside science there. <

You have a strange conception of science, and you clearly don't bother to read what I write. I said that what you think can be read retrospectively via evolution is wishful thinking, partly because it is too complex, partly because you need emergent properties to achieve that understanding. No magic involved.

> Emergence is just ignorance of retrospect as the tool to explain, in preference for amazement that things change when they aggregate. <

> The fact appears to be that what we call rational thought is just locally, normed talk (chit chat) and is entirely epiphenomenal and may be deceptive about actual brain processes controlling behavior. <

And I suppose this includes what you just wrote? You see how nonsensical that sort of thinking becomes if pushed to the extreme? It's one thing to note that human beings can rationalize and confabulate, but if one goes so far as to say that there is no rational thought at all, then one is being both irrational and self-defeating.

Yes, writing is just post hoc verbal behavior that is social signaling based on local norms and carries little information value.

Information value is based on predicting the future. Math and data predict the future, not natural language. In addition, it appears that writing/language comes well after the fact of its creation/triggering. No one has the time or energy to think about every word or action.

Again, if any of the so-called exceptional human qualities were useful we would find them in other species and they would have evolved far earlier.

Naive realism, local natural language usage, and reification are self-defeating.

Ah, so the evo bio among us proposes that there are human behavioral abilities that are discontinuous with the hundreds of millions of years of evolution!

Pray tell what are these exceptional human behaviors and where did they originate genetically and ecologically?

Ah, so censorship, not because the ideas contradict your beliefs but because of "style." The same argument all repressive ideologies make: "rudeness." Turns out "free thinkers" are only so inside the confines of their pet ideas. Where does your use of the term "bullshit" stand in your style requirement? Highly hypocritical, but predictable.

You have no control of your reactive, defensive/fear-based behavior; best not to pretend otherwise. lol You fear new and different ideas but pretend it is social norms/politeness. Not true. Dare you to post this.

Fascinating discussion. It's a shame Harris didn't show! I predict you'd have extracted a lot of surprising concessions from him, suitably extricated from the dialectic with postmodernists and supernaturalists.

'Explanatory level' does seem to be a rather confused idea, conflating the very general idea of explanatory hierarchy (where some facts asymmetrically account for others; the existence of electrons helps us understand why broccoli exists, but not the other way around) with the restricted body of referents (and conceptualizations thereof) of a scientific discipline's theories, with size, etc. But we should be careful not to leave anything out when we cannibalize all the useful component ideas; there is also a hierarchy of precisification, where terms tend to have vaguer truth-conditions the further they get from physics. The mereological and developmental boundaries of a heart made of atoms include all the intrinsic vagueness of atoms, and then some.

You criticize the in-principle reducibility of all phenomena to physics as "a metaphysical promissory note that will never be cashed". But there are many reasonable positions in this neighborhood that we don't consider crazy; it is reasonable to suppose that all non-avian dinosaurs were made of atoms, even though we can't actually check each individual's composition to confirm this. The denial of strong emergence should not be treated as a dogma or scientific sine-qua-non; it should be treated as a tentative null hypothesis, and as a challenge to find a disconfirming case. Universal generalizations are much harder to prove than to disprove, which inclines me to place more of the burden of proof on the anti-reductionists, who have a vastly easier case to make if correct.

You report that Dennett argued that "If lower levels entail (logically) higher phenomena," we would not expect beings with different biochemical properties to converge upon the same large-scale behaviors. That's got it backwards; it's only if the higher phenomena (e.g., economics) entail the lower level (e.g., chemistry) that we should automatically doubt high-level convergence. Likewise, in your criticism of Rosenberg's "the physical facts fix all the facts," you say that the multiple realizability of a macro-model by micro-models may make inferring the right micro-model impossible in some cases. But this only problematizes the higher levels' ability to fix physics, not physics' ability to fix more general, complex, large-scale, or imprecise facts.

Finally, you suggest that the existence of boundary conditions entails a kind of emergence. Does this mean you reject Deacon's claim that "emergence is inherently diachronic, i.e., emergent properties are behaviors that did not appear up to a certain time in the history of the universe"?